Foundry Live 2021 | Key Forces and Technologies Driving Change in the VFX Industry

- [Joyce] Hi, everyone. Welcome and thank you for joining Foundry Live, Key Forces and Technologies Driving Change in the VFX Industry. My name is Joyce, Industry Marketing Manager here at Foundry, and I hope you're doing well, and staying safe wherever you are in this world. If you are new to our virtual event series, welcome, and I hope you enjoy today's webinar. I wanted to run through just a few updates and some housekeeping items.

This session will be recorded and will be available straight after today's webinar. The session will also end with a Q&A, so if you have any questions, please add them to the questions tab on the right-hand side of your screen. I'd like to take this moment to thank our sponsors, Lenovo and AMD. We are working with Lenovo and AMD and our other partners as part of our new program to test selected third-party workstations across all of our products. Thanks to our sponsors, we are excited to give away a ThinkStation P620, a Lenovo and AMD Threadripper system, which includes an Nvidia Quadro RTX 5000, to one lucky attendee.

So to be eligible for this giveaway, you would need to opt in upon registration. If you didn't opt in when you registered, that's totally fine. Just make sure you've registered for one of the two upcoming Foundry Live events and that you opt in for that option, and you'll be slotted into the competition. We started Foundry Live last summer as a way to connect with you virtually and share product updates and releases. This year we are really excited to come back for part two, and our schedule includes the Modo 15 release, which is due next week. And then we will wrap up Foundry Live with an Education Summit at the end of next week.

So for more information and to register please visit our Livestorm page or our Foundry events page. This year we're also taking part in several industry virtual events. We recently participated in the HPA Tech Retreat and Spark FX. But be on the lookout for Foundry at the virtual FMX, GTC, the RealTime Conference, and SIGGRAPH. I'd like to take this opportunity to mention that Foundry is actively involved with the Academy Software Foundation, as we use many of their open source projects that are widely used in production. We have representation on the board and the outreach committee, and are actively involved in the technical advisory council and various working groups.

So to be successful the Academy Software Foundation needs community members like you to contribute code, get involved, and help spread the word about the foundation. So visit their page and find out how you can get involved with them. Foundry has many learning resources available on our learn page including tutorials, developer documentation, and release notes for all of our products. As part of our ongoing initiative to provide valuable learning resources, Foundry is really excited to launch the Color Management Fundamentals and ACES workflow training created by Victor Perez and Netflix. For more information on the training material join our upcoming webinar on March 31st. Also, just to give you a heads up, we've got some really, really cool tutorials on CopyCat and machine learning as well.

So definitely make sure you check out our learn page. Stay connected with Foundry by subscribing to our YouTube channel. Check our Insights Hub as well where you can find a lot of our case studies, articles, and industry trends or follow us on social media. I'd also like to give a massive shout out to Daniel Smith who has recently launched his book, "Nuke Codex" and we are giving away another three books for free today. So again, no actions required from any of you guys.

We will pick winners from the audience, and we will send you an email if you are the winner. So stay tuned and keep an eye on your emails. So with that, thank you all for taking the time to join the webinar today. I hope that you enjoy it. And if you have any feedback, please feel free to send us an email on virtual.events@foundry.com. And now I'm going to hand over to our speakers.

Welcome. - [Matt] Hi everyone and welcome. I'm delighted to be able to talk today about some of the major disruptions that are going on in the VFX world, especially those that are driven by the rise of new technologies. I'm Matt, and I'm the Director of Product for New Technologies here at Foundry. And I'm joined today by my colleague, Dan Ring, who is our Head of Research.

So Dan and I work in Foundry's research team. The research team at Foundry is a special group of people. We're made up of machine learning experts, virtual production practitioners, developers who've built and operated cloud services, and a whole lot more. I like to think of the research team as kind of like the Q division in a James Bond movie, where we create all the gadgets that are used before a mission. In reality, what we really do is we develop ideas and experiments around new technologies that are emerging in production.

And our method is to do this by working closely with customers, with partners in the industry and academia, but most importantly by working really closely with Foundry's product teams, because at the end of the day, our mission is to bring new capabilities that enhance creative workflows for artists like you. So today we're gonna talk about what's going on in Foundry research in the context of how the world is changing. Well, let's have a look at some of the big forces at play that are disrupting the way we work. So first, due to COVID, attitudes towards remote work have changed dramatically. Many of the things that we do today, like remote review meetings or working on shots from the comfort of our home.

I mean, these are things that would have been frowned upon as little as two years ago, but now we're on the cusp of passing through the stage of working this way out of necessity. We've now realized efficiencies and advantages from working remotely. The normalization of working from home has huge implications about how creative workflows and tools need to adapt. And we're really excited about the possibilities there. We're also seeing a gradual and steady increase in cloud adoption.

Intuitively one would think that the COVID crisis would be a big accelerator for the shift to cloud, but this tech trend is really much more nuanced, and the major implications of how cloud is gonna radically change the way we work are still being realized. And virtual production is a technique that's made a huge splash recently, and it's clear that it's here to stay. It seems like everywhere you look there's a blog post about LED screens being used on set or game engines being an important part of the asset creation process. And we think what underlies virtual production is something bigger than the excitement of "The Mandalorian" or Unreal.

It's really about a new way of working that's going to disrupt the entire production process. And finally, at a high level, it's just more, more episodic content, more streaming services, more high quality visual effects everywhere, and that means more people, more efficiency, more throughput. And the question is how are we gonna meet all of the challenges and opportunities that are stimulated by these trends? What are the new technologies and the new ways of working that need to come into our day to day in order to create content in the future? So this is what we're focused on in Foundry research. So let's break this down a little bit more. We believe that working from home, and more widely enabling globally distributed productions, requires, above all else, the ability for data to flow, and getting the right data to the right people in the right place needs to be automatic, needs to be secure, and needs to be fast.

And without this, the efficiencies from working in a more flexible manner will just never be fully realized. And second, compute power needs to be available to anyone that needs it. As with our phones, our creative tools can be amplified by tapping into power that lives beyond their physical constraints. Being able to dial power up and down, for example, through the cloud, needs to become a natural part of how people realize creative experiences. Virtual production is about a lot more than the technology of LED volumes.

What's happening is a blurring of the lines between all of the traditional phases of content production. Most especially, the line between on set and post is fainter than ever. And with all of these walls coming down everything starts to look more like a bunch of tight loops with real-time feedback, rather than the traditional linear pipeline that we would draw to explain the production process. And finally, we need to do more. Tools that we need to unlock big leaps in productivity and speed and iteration time need to come to fruition.

We can only throw more people at the rising demand for content for so long. And we all want tools that let us do more in less time. And of course, this is nothing new, but with the demand for content rising so quickly, we need to release some steam from the pressure cooker. In the end, what all of this comes down to is really simple. It's all about more scale and efficiency.

We need to scale people, pipelines, data, and power, and we need to create more efficient ways of working that get more done in less time. So how do we do this? Well, we believe that there are three key technologies that are gonna blaze a path for the future. So first, machine learning is going to empower artists to do a lot more in a lot less time while keeping total control over the creative process. And second, real-time workflows are gonna blur the lines between traditional production processes.

But more importantly, they're gonna rewire the relationships between individual creators and how they interface with their tools. And finally, in order to realize everything above at scale, data needs to flow. Global collaborative teams that are using real-time tools are gonna create a virtuous circle, but only if they can exchange their data predictably. And machine learning needs to inhale massive amounts of data and bring it to where it can be crunched at scale. Machine learning, real-time workflows, and distributed data. So what we'll do today is we're gonna dive into each one of these in more detail and talk about what we're working on.

So, Dan, why don't you get us started with machine learning? - [Dan] Thanks, Matt. Hi, I'm Dan Ring. I'm happy to be kicking off our deeper dive into the first of our topics, which is machine learning. Machine learning is obviously a very hot topic at the moment, particularly for visual effects.

Recently, there have been a number of very impressive use cases for machine learning in visual effects. In this section, I'm gonna explain how we see machine learning being used for visual effects, both some things that are being released now in Nuke as well as where we see machine learning being used in visual effects in the future. So let me take you back for a minute and look at the ML Server. Back in 2019, we released the ML Server as a project to help us experiment with getting the power of machine learning into Nuke, into artists' hands.

Now while this was great for experimenting with, it turned out not to be suitable for production, but we still wanted to deliver that same power of machine learning into artists' hands. So that's where we came up with the idea of CopyCat. CopyCat is a tool for Nuke that allows an artist to train a neural network to create their own sequence-specific effect. The artist gives the tool a small set of before and after example images, and CopyCat learns to replicate that transformation process across the rest of the sequence. Essentially CopyCat puts AI into the hands of the artist's imagination.

Now you may have already seen this example in the Foundry Live Nuke 13.0 session, but let me show you how easy it is to get going with CopyCat. Let's start by creating the CopyCat node in Nuke. In this example we want to Roto this guy from the desert background. So we start by selecting six key frames to Roto. And see here that we've loosely Rotoed the guy from the background.

We then feed those six key frames into the CopyCat node and we click start training. And you can see in this contact sheet here what the CopyCat node is doing during the training session. So training can typically take a while, anywhere from minutes to hours. So we recommend that you go have some lunch. And when you come back from lunch, CopyCat will have figured out how to isolate that guy from all of the other frames in this shot.
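
To make the idea behind this kind of training concrete, here is a minimal, hypothetical PyTorch sketch of the general approach CopyCat illustrates: a small image-to-image network fitted to a handful of before/after key frame pairs, then applied to the rest of the sequence. This is not Foundry's implementation; the network, loss, and training loop are simplified assumptions purely for illustration.

```python
# Hypothetical sketch of few-shot image-to-image training, in the spirit of CopyCat.
# Not Foundry's implementation; network, loss, and data handling are simplified.
import torch
import torch.nn as nn

class TinyImageToImage(nn.Module):
    """A deliberately small conv net mapping an RGB frame to an output frame (e.g. a matte)."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def train_on_keyframes(befores, afters, steps=2000, lr=1e-3):
    """befores/afters: lists of (C, H, W) tensors for the hand-prepared key frames."""
    model = TinyImageToImage(in_ch=befores[0].shape[0], out_ch=afters[0].shape[0])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x = torch.stack(befores)  # e.g. the six loosely rotoed key frames
    y = torch.stack(afters)   # the matching mattes / paint-outs
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.l1_loss(model(x), y)
        loss.backward()
        opt.step()
    return model

# After training, every remaining frame of the shot is pushed through the model:
# matte = train_on_keyframes(key_inputs, key_outputs)(frame.unsqueeze(0))
```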

Now let me share with you some more exciting examples of what you can do with CopyCat. In the last example we showed how, if you supplied loose Roto as the input to CopyCat, then the output from CopyCat would correspond to something that was like loose Roto. The same is true with higher quality inputs. So in this example, we've paid more attention to the edges of the runner here when creating the Roto on the key frames.

And you can see here that the corresponding output is of a much higher quality. You can see that it really captured the edges of the runner quite well here. And like the previous example, this shot was obtained using only a handful of key frames and training for a relatively short amount of time. This next example is about using CopyCat for digital beauty work.

Here we want to use CopyCat to remove the graze from her elbow. And as before, we select a handful of key frames, I think in this case about 11 were used, and we paint them out by hand. Now the interesting thing about this shot is that the lighting on her elbow changes quite severely over the course of this thousand frame shot. So it's incredibly impressive that CopyCat is able to pick up on that variation and return clean, ungrazed elbows for the entirety of the shot, all without the use of any tracking whatsoever. In this example, we've used CopyCat to remove the bruise from under this man's eye. The two impressive things about this are, firstly, that the detail of the skin is preserved between the before image on the left and the after image on the right.

The second impressive thing is that the results of the shot were obtained by using only two key frames with no additional tracking or compositing. This really starts to demonstrate the power of CopyCat for everyday tasks. This example of digital beauty work is slightly more complicated. Here we're removing this person's beard.

And as before we pick a handful of frames. In this case I think there were 11, and we paint out the beard in each of those frames. The difference here is that the training time for this was significantly longer than the other examples. So this one had to be run overnight.

We believe this was largely due to the variation in light and color, and also the movement of the person's jaw. Finally, I wanted to show a really cool example of how to use CopyCat to fix fundamentally broken shots. For example, on the left here, we have a shot that repeatedly goes in and out of focus. That's completely unusable.

Here we looked through the shot to find patches that were in focus in one frame and then out of focus in another frame. And then we supplied those patches, about 10 or 11 or so, to CopyCat, and asked CopyCat to figure out the transform from out of focus into focus. What that did was create a filter that allows you to bring everything back into a sharper picture like the one that you're seeing here on the right. So this would allow you to recover a shot that was previously completely unusable. So that's a flavor of CopyCat, one of the tools inside the machine learning tool set released as part of Nuke 13.0. If you wanna learn more about what's inside Nuke 13.0, check out the Empowering Artists With Nuke 13 Foundry Live session.

Another thing I wanted to talk about was the future of machine learning for visual effects. One lesson we learned with the ML Server was that we didn't want to be the gatekeepers of any of this machine learning technology. The field moves on so quickly and the field itself is very democratized. It's very easy to pick up machine learning frameworks and tools and start putting things together and building your own models. So we wanted to make sure that anybody could come along with their own models and use them natively inside our own software. So when we think about it, we started thinking, well, what does the future look like? So imagine five years from now where does machine learning fit in the grand scheme of visual effects software? Well, we see machine learning as being actually one of the key components to extensibility.

So if you imagine back in the very beginnings of visual effects software, they would have been quite monolithic. And then along came C++ SDKs that allowed you to write your own plugins and build high-performance tools for certain specific tasks that you could drop into your host application. And then along came Python scripting, which allowed you to automate very complex tasks, and most importantly helped you integrate visual effects software into pipelines. And we believe that providing the frameworks for supporting machine learning natively inside the host application opens up a whole world of extensibility. It means you can take cutting edge models from research or your own facility, or any sort of GitHub repo, and deliver them natively into your host VFX software. We believe that this is the eventual future of machine learning for visual effects, but it's not just the far off future.

You can already start to do that today for some limited models. So as part of Nuke 13.0, we also shipped the Inference node which allows you to natively drop in your own machine learning models and use them performantly and reliably within Nuke.
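
As a rough illustration of what "bring your own model" can look like on the PyTorch side, the sketch below defines a small model and exports it with TorchScript so it can be handed to a host application. The four-channels-in, four-channels-out, same-resolution shape mirrors the initial Inference node constraints mentioned in the Q&A later in this session; the Nuke-specific conversion step is not shown and the model itself is a toy assumption, so check Foundry's documentation for the supported workflow.

```python
# Hypothetical example: exporting a simple PyTorch model to TorchScript so a host
# application can load it. The 4-in/4-out, same-resolution shape reflects the initial
# constraints described in the Q&A; the Nuke-side conversion step is not shown.
import torch
import torch.nn as nn

class SimpleGrade(nn.Module):
    """A toy residual 'grade' on 4-channel images (e.g. RGBA), same size in and out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 4, 3, padding=1),
        )

    def forward(self, x):        # x: (N, 4, H, W)
        return x + self.body(x)  # output keeps the input resolution and channel count

model = SimpleGrade().eval()
example = torch.rand(1, 4, 256, 256)
scripted = torch.jit.trace(model, example)  # freeze the graph as a TorchScript module
scripted.save("simple_grade.pt")            # hand this file to your model-conversion step
```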

The last project that I wanted to call out that we're working on is one called SmartROTO. The goal behind SmartROTO, which is a funded research project between the University of Bath, DNEG, and Foundry, is essentially to speed up Rotoscoping. The goal here is not to replace artists. In fact, it's about making sure that the artist has full control over the Rotoscoping process.

In particular, we aim to interpolate shapes better using machine learning. Crucially, it's about feeding back all tweaks to improve the system. The SmartROTO project has yet to conclude, but I did want to tease a couple of example pictures of what the system can produce. So here at the top you can see the two key frames that we're supplying to the system, at frames 1001 and 1020.

On the bottom left you can see the shape as predicted by the SmartROTO system at key frame 1010. If we look at the difference against the ground truth on the right-hand side, you can see that it's pretty similar. So we can say that this is about the same shape as an artist would have drawn for this given key frame.

I'm looking forward to sharing more results from the SmartROTO project later on in the year. And so, having presented a few things that we're doing in machine learning for visual effects today and also painted a picture of where we think it's going in the future, I did just wanna wrap up and say why we think machine learning for visual effects is interesting and what we've learned so far. The main reason is that machine learning allows you to solve harder problems than you could previously solve. For example, deep fakes or the kinds of high quality, super resolution results we've seen to date. Doing this squashes repetitive tasks, which gets you to being creative sooner, faster. And all of this is about driving efficiency, allowing you to work on larger numbers of shots at a higher quality.

Ultimately, what we want to do is assist and not replace artists. There's another topic that dramatically drives scale and efficiency within visual effects and animation, and that's the topic of real-time workflows. Now, we've all heard about real-time technologies, such as game engines, renderers, and even live motion graphics. But we spent a lot of time thinking about what actually is real time and what are we trying to optimize? Let me give you an example. Here's a very simplified example of the post-production pipeline between the Lighting and Comp Departments.

Imagine you're a lighter and you start lighting. You start here on the left-hand side. You then work across and then you start improving your work. You then decide, do I need to keep working on my work? And if yes, you make some changes, and then you reevaluate whether you need to continue.

If you no longer need to continue, you pass your work over to Comp as you start moving towards the right. The Comp department might be happy with your work, in which case it goes onwards. Otherwise, they might request a change. And in that case, it's kicked back to lighting.

We then go back to our improved lighting box in the Lighting Department. We evaluate do we need to make more changes? And if we're happy then, again, we pass it back to Comp. And maybe this time Comp is happy with the changes, it works for them, and then it gets passed down. And eventually it gets to the client where it's reviewed, and if the client's happy with the work then obviously they accept it and everybody's happy, but otherwise, if any changes are needed to be made, then, again, that either gets kicked back to Comp or likely it has to go back to lighting again. And we go back to our improved lighting box and again go back through the pipeline. So the goal here is improving the efficiency of this pipeline.

There are three ways we wanna do that. The first is reducing the time to iterate on one shot. That's the orange loop. That's that the artist, the lighter, who's working on that one shot. The second is to reduce the number of external iterations, the red loop.

That's when other departments or the client require you to make a change where you have to go all the way back to the beginning of the pipe. And the third is about increasing the number of concurrent shots that you're working on. That's the blue dashed loop, and this is all about boosting your throughput. These are some ways we can improve a pipeline's scale and efficiency. Now, what does an ideal pipeline look like? On the right here is an imagining of a modern real-time pipeline.

There's some key differences here between this pipeline and the previous one. The first is that all departments have access to all of the data on the pipeline, shown here by the pink box and the pink arrows. The second is that all relevant stakeholders have input into the decision as to whether an artist needs to improve their work or not. Now, obviously this doesn't mean that the client needs to be involved in every artist decision; however, it does mean that a key stakeholder in one department is aware of what's happening in the departments around them.

Once you've done this, some very interesting things are possible. The first is that an artist can always be working in the context of the final image. Let me give you an example.

Imagine the Comp Department has already put together a stock comp of how they think that the shot should look. That means that a lighter can come along and start doing their lighting work, but also feed that work forwards into kind of an initial pass at the Comp. And they can see the effect of their work in the Comp.

That means that they're always seeing the final image or the current version of the final image, and they can make very informed decisions. This ultimately means that they'll get to their finished shot sooner. So that was all about closing the orange loop, their own iteration loop, sooner. The second is that all stakeholders are always in the loop.

So this means that somebody in one department is aware of what's going on in the others, and it should mean fewer kickbacks or fewer revisions. So this was all about reducing that red loop. And lastly, it also means less waiting on other departments. Again, boosting efficiency and raising throughput.

So this is the optimal structure for a modern real-time pipeline. How do we build it practically? What do we need? Well, we've identified three things. The first are enabling real-time technologies. These are the things you commonly think of when you hear of real-time, so things like renderers and game engines, and these are the things that optimize the artist's orange iteration loop. The second is reduced pipeline friction, and by this we mean better interop between your DCC packages.

This allows different departments to work and collaborate more effectively. The third is pipeline standardization. This is ensuring that the data that lives in the pipeline, or that moves between the pipeline and the various departments, is in a very standardized, open format. Particular examples being USD, Hydra, OTIO, and others.

So now we've set the scene for all of the things that you need in order to enable a real-time pipeline. So what have we been building at Foundry to help you with all this? Well, I'm delighted to announce Genio, our Nuke to Unreal interop bridge. To use Genio you start by launching the server. This allows Nuke to then connect to the Unreal session and start jumping into the map and sequences.

You can see here we've switched to the master sequence. One of the most important features of Genio is the ability to keep your scene live, or somewhat alive. This means that you can edit your Unreal scene, and then when you're ready, you can go back to Nuke and click fetch latest. This will look for any changes that you've made and start pulling them through into Nuke. You also get a lot of options when it comes to render passes. You get the beauty pass, obviously, but also utility passes like positions, normals, depth, and everyone's favorite, Cryptomatte, as well as many, many more.

You can also control many of the advanced settings of the Unreal render from the property panel inside Nuke. In Nuke it's also possible to create a camera that is a live reflection of the Unreal camera. This is the beginning of some really interesting workflows, particularly around projection work and digital matte painting.

I'll start to show some of these later on. As I mentioned before, Genio allows you to pull Cryptomatte data over the Unreal bridge. This allows you to use the Cryptomatte gizmos within Nuke to start picking out and isolating objects and layers for specific compositing work. To conclude this Genio demo I wanna show an impressive example that clearly demonstrates how to work efficiently and at scale. And that is working across multiple shots at the same time.

And because we have our camera and our position passes, we can create a color volume, where any points within a certain radius of this volume have their color changed. In this case, we're changing the egg and its surrounding areas to green, for all shots in this sequence. And lastly, I'd like to thank Weta and Epic Games for this fantastic meerkat scene.
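
To sketch what a color volume like this does under the hood, here is a hypothetical NumPy version of the idea: use a world-position pass to build a soft mask around a 3D point, then push the color toward green inside that radius. It's a conceptual stand-in for the node-based setup in the demo, not the Genio implementation, and the function name and parameters are illustrative.

```python
# Hypothetical NumPy sketch of a position-pass color volume (conceptual, not the Genio setup).
import numpy as np

def color_volume(rgb, position_pass, center, radius, tint=(0.0, 1.0, 0.0), strength=0.8):
    """rgb: (H, W, 3) image; position_pass: (H, W, 3) world-space position per pixel."""
    # Distance of every pixel's world position from the volume center.
    dist = np.linalg.norm(position_pass - np.asarray(center), axis=-1)
    # Soft falloff: 1.0 inside the volume, fading to 0.0 at the radius.
    mask = np.clip(1.0 - dist / radius, 0.0, 1.0)[..., None]
    tinted = rgb * (1.0 - strength) + np.asarray(tint) * strength
    return rgb * (1.0 - mask) + tinted * mask

# Because the center and radius are defined in world space, applying this per frame affects
# every shot that shares the scene, which is what makes the multi-shot change possible.
```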

The next piece of work around interop that I wanna present is that between lighting and comp, two sister disciplines where you often have the lighting and compositing packages, Katana and Nuke, working side by side. This video was also shown in the previous Nuke and Lighting Foundry Live sessions, in case it looks familiar. In this video we have a live render coming from Katana, generated by the Foresight Rendering system introduced in Katana 4.

Pixels as they're rendered are being streamed live into Nuke, where we have a relatively simple composite here with a color correct and ZDefocus. Pixels are then streamed out of Nuke, back to Katana, where a catalog entry is created and can be seen in the monitor. As a lighter, you're working more directly within the context of the final composite, but all within Katana. Again, the idea here is about reducing the number of kickbacks, or round trips, where you have to go back to an earlier stage in your pipeline. Now, again, this is still a very early proof of concept, and the team has plenty of ideas on how to improve the interop between Nuke and Katana.

So watch this space. Now changing tack slightly, but still keeping well within the themes of scale and efficiency, I wanna turn our attention now to a very popular real-time workflow that's taking the production world by storm, and that is, of course, virtual production and in-camera VFX. Virtual production and in-camera VFX have seen a meteoric rise in their use over the last couple of years, having empowered filmmakers to create some phenomenal works of art. In this talk, we're not gonna present any pretty pictures or extol the virtues of why virtual production is so important as a filmmaking tool. So many other talks have done this far better than we could ever hope to. Instead, we're gonna focus on two questions that have really been challenging us lately.

The first is how does this change the roles for VFX in filmmaking? And secondly, what are the biggest future challenges? Now to answer how the roles change for VFX in virtual production scenarios, it's actually quite hard to find quantifiable data, but the best thing that we've found so far in helping us explore this is actually looking to IMDb. The idea is to look at the VFX crew count between productions that have used a lot of in-camera VFX versus productions that probably used less or even no virtual production. Now it turns out it's actually very hard to compare apples and oranges. So the best we've come up with for this was comparing "Star Wars" episodes eight and nine against "The Mandalorian," which was obviously a very high profile, high quality production using a lot of in-camera VFX. What we're looking to spot here is whether the composition or makeup of these VFX teams differs depending on whether they used in-camera VFX or not. Now, before we dig any deeper I do wanna obviously caveat this and say that this data has come from IMDb, and we're only measuring the difference in the number of VFX cast credits.

It doesn't mention anything about how long a credited person has worked on the film or anything else. So please do take all of this with a very healthy dose of salt. Now, looking at the three productions like this, you can see that the composition is roughly the same across all three, but let's normalize the data so that we can see where the differences lie. With the data normalized, you can start to see that, yes, generally across the three productions, the composition and makeup of the teams is largely similar, except for two things that stood out. The first of which is, in general, the percentage of compositors remained largely the same across the three productions. This is interesting, as one of the selling points of in-camera VFX is the reduced need for compositing.

Now, of course, we can't know when these compositors came on, whether they were involved before the shoot or after the shoot, but still it's interesting to note that the proportion of compositors remained largely similar across these productions. Now I'm gonna put a pin in that for the moment and come back to it shortly. The second thing that stood out was the dramatic increase in the number of cast credits associated with asset prep.

In particular, we saw a rise of about 40%. Now I'm gonna put another pin in this, and I'm gonna remind everybody that roughly the number of compositors hasn't changed, yet the number of crew required for asset prep has increased. Now, I wanna dig in a little bit more on data, and particularly the roles on set that are generating data and where that data goes. So we've spent some time digging into the tasks and operations that happen as part of a virtual production, the roles and people who do them, as well as the data that's generated and where it's supposed to end up. The latter of which is shown in this simplified color coded diagram here.

In purple, you can see the data that's created by the virtual art department, either in advance or as part of the prep. In red you can see the necessary data that's generated for the virtual production shoot, either as part of prep, in advance of the shoot on set, or during the shoot itself. And in yellow is the data generated that's required or would be very useful to have for post-production later on. For example, camera tracks, motion capture, LIDAR scans of the set, lens profile information, asset IDs, lighting information, and more. Let's focus in on the data that's either required or would be very useful to have for post-production.

You can see that there's a number of yellow boxes here. And one of the common complaints that we hear about is the fact that very little of this data ever makes it to post. In fact, typically once the director says cut, the majority of this data never ever makes it to post. And this is one area we'd really like to address. We see this work as being a core part of pipeline standardization, and it's never been more important than it is right now. So how do we propose solving this? Well, in a nutshell, it's about conforming everything to a single timeline of truth.

So that is, any data that's generated with a timestamp should be able to be conformed to a single hero timeline. And that becomes a single source of truth for the entire production. And not only that, but we use very open standards for how we describe and conform this data to this timeline. For example, in this mock-up, I'm showing how we're conforming the USD kitchen scene onto a timeline that also has audio and video on it. This timeline is described by OTIO, which means that it can easily be moved to editorial and back again, in a very open way.
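
As a small, hedged example of what describing such a timeline with OTIO can look like, the sketch below builds a timeline with a plate track and a second track whose clip points at a USD scene. The file paths, clip names, frame ranges, and the idea of carrying the USD reference in clip metadata are illustrative assumptions, not a Foundry or industry convention.

```python
# Hypothetical sketch: conforming timestamped data onto an OTIO-described "timeline of truth".
# Paths, names, and the metadata layout are illustrative assumptions.
import opentimelineio as otio

timeline = otio.schema.Timeline(name="shot010_timeline_of_truth")

# Video track: the camera plate, 48 frames at 24 fps starting at frame 0.
plate_track = otio.schema.Track(name="plate", kind=otio.schema.TrackKind.Video)
plate_track.append(
    otio.schema.Clip(
        name="shot010_plate",
        media_reference=otio.schema.ExternalReference(target_url="file:///plates/shot010.mov"),
        source_range=otio.opentime.TimeRange(
            start_time=otio.opentime.RationalTime(0, 24),
            duration=otio.opentime.RationalTime(48, 24),
        ),
    )
)
timeline.tracks.append(plate_track)

# A second track carrying a reference to the USD scene over the same time range;
# stashing the USD path in clip metadata is one possible convention, not a standard.
scene_track = otio.schema.Track(name="usd_scene", kind=otio.schema.TrackKind.Video)
usd_clip = otio.schema.Clip(
    name="kitchen_set",
    source_range=otio.opentime.TimeRange(
        start_time=otio.opentime.RationalTime(0, 24),
        duration=otio.opentime.RationalTime(48, 24),
    ),
)
usd_clip.metadata["usd"] = {"asset_path": "/assets/kitchen_set/kitchen_set.usd"}
scene_track.append(usd_clip)
timeline.tracks.append(scene_track)

# The resulting .otio file can move to editorial and back in an open, tool-agnostic way.
otio.adapters.write_to_file(timeline, "shot010.otio")
```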

As mentioned, we wanna make sure that any data with a timestamp can be conformed. Now we already do this with video and audio and have demonstrated how you could do it with USD, but there are many other rich data types that we also wanna make sure that we capture, such as lens metadata, MOCAP, lighting information, object tracks, camera tracks, and lots more. The goal of the Timeline of Truth isn't just to conform it and just keep it on a timeline.

The idea is to make sure that that data then gets to the people who need it, either to editorial for review, or, in particular, to the artists who have to work with it in post. This is where powerful standards like USD, Hydra, OTIO, and more all come into play, and help improve the efficiency of post-production. Now I'm gonna return to those points I made on asset prep and compositing earlier and relate them to real-time workflows. Starting with asset prep, one of the biggest challenges in virtual production is obviously getting all of your assets ready in time for the shoot, but not just that.

They need to be in the right format, and they need to be optimized for use on set. This is clearly a hard problem, and likely why the number of people involved in asset prep is higher than on non-virtual productions. This really underscores the importance of using open standards and also ensuring that DCC apps talk very plainly and easily to one another. When all of these elements are working together you'll have a highly effective real-time workflow. And now onto compositing.

For in-camera VFX, we've seen the incredible power of being able to get something very close to the final image there and then, and be able to take it home that day. And that's phenomenal for the filmmaking process. However, in many cases, it may not always be possible to take the shot home that day. It may need more work. And that's why the volume of compositing is still quite high for in-camera VFX shoots.

In many cases, it's due to common and traditional compositing tasks, but we're also seeing the rise of new challenges. Color is a big challenge, not only in maintaining a consistent color pipeline, but in matching colors between the physical set and the LED walls. There are also physical effects of the wall seen through the camera, such as moiré or the effects of reduced light from glancing angles. Additional rigs like tracking lighthouses also need to be removed.

And last, but certainly not least, everyone's favorite, Roto. Because we're no longer using green screens, everything needs to be Rotoed. Now this shouldn't deter anybody from doing in-camera visual effects. In fact, these are all solvable problems.

But one thing that can dramatically improve the efficiency of this work and reduce the costs is having more efficient real-time workflows and real-time pipelines, that is, better data sharing and delivery of the data that's captured on set into post. We also see this as a core area where machine learning tools like CopyCat can be used to dramatically improve the efficiency of these types of compositing tasks for virtual production. We're very interested in exploring this further.

To wrap this up, I wanna say that real-time workflows are more than just fast renders. For effective real-time pipelines, you also need to make sure that you're reducing your pipeline friction and that your pipeline is built on open standards. We also believe that we're only seeing the beginning of what's possible with in-camera VFX.

Now we've begun some exciting work in the area, and I'm hoping that we can share some of it later in the year. And finally, we believe that real-time is here to stay and will become the de facto way that you work, in that way, real-time workflows are simply going to be known as workflows. Now, let me hand you back to Matt for the final topic on distributed data. - [Matt] Thanks, Dan.

Finally, let's talk about how data needs to be managed in the context of these emerging workflows. In a world of real-time workflows, globally distributed productions, and machine-assisted tools, connecting people to data is a critical problem to solve. The question is, how do we scale for more performance, power, and flexibility while doing this securely and maintaining all of the connections between things so we don't end up worse than where we started? In order to get our heads around this, let's have a look at how data moves through the pipeline in the past, the present, and hopefully the future. So this is a simplified view of how data moved around before the pandemic. VFX studios would ingest plates, usually from a DI house, and then iterate on shots until they turn over renders. Wrangling all of this data is kind of a hassle.

There's usually stuff missing, things get added or removed along the way, and a lot of the management for this work falls on the shoulders of data wranglers. But more critically, the security aspects of this process are mainly physical and IT related. Now this way of working is also hard to scale. Every studio works a little bit differently, so it's hard to establish a single data management process across the spectrum of, say, an entire show. It's also not very transparent. Data's flying around on portable disks, and there isn't a central digital chain of custody.

And then the pandemic hit. And suddenly many of us are working from home. And studios responded by rapidly deploying remote desktop technologies like Teradici. And you've probably worked with one of these systems in the last year. In a way, this saved the day and it kept the wheels turning.

But in another sense, it wasn't a fundamental transformation of how we work, and it even kind of introduced a few new problems. For example, the studio's inbound internet is now a business critical bottleneck, and potentially a single point of failure. But hey, we saved a commute, which is great. And some would say we've even been a bit more productive working from home in some cases.

But, this isn't the force multiplier that we've been promised from all of this cloud talk. Well, just before the pandemic hit, a think tank called MovieLabs wrote a highly influential paper called "The Evolution of Media Creation." I highly recommend you give it a read. It laid out a bold vision for how moving to the cloud was not only going to happen, but also outlined some of the efficiencies that we would be able to achieve by doing this. MovieLabs set out a bold target of 2030 for this transformation to happen. And several major studios are already in the process of making the 2030 vision a reality.

But there are tough challenges for this pure cloud approach. Trust me, we know. Foundry has previously experimented in this space with a pure cloud offering called Athera. And we learned firsthand that having all of your data going in and out of the cloud is challenging in terms of workflow, performance, and cost, especially for VFX workflows.

Our data sets are massive, and cloud providers charge for something called egress, which means that fees are incurred whenever data leaves the cloud. Streaming 4K images out to your screen all day can get pretty expensive. So what's the reality in the meantime? How do we bridge today with the promise of a cloud-enabled world like the 2030 vision? Well, we think one solution is to make our tools smarter about how they manage data. If you could reference data in bundles where your application has knowledge of where that data lives, whether it's in the cloud, another facility, or somewhere on your disk, then we can teach it how to move data around in order to optimize whatever workflow you're using. So for example, let's say I wanna train a machine learning model. With smarter data management, my application can find everything that needs to be referenced in order to do that, send it all off to an offline machine, fire up the training, and pull everything back.

Even better, this round trip can be encrypted and secure. It can leave an audit trail, and it can notify the rest of the pipeline when it's finished. We call this type of workflow the Smart Data Platform. It's an early stage project in our research team, and it's designed to allow studios to extend their pipelines and workflows into the cloud by making applications smarter about data. Specifically, the Smart Data Platform references data using IDs, bundles, and content hashes, but not the old error-prone file paths. Metadata like relationships between data, permission models, and versions are all held separately from the data itself and can be defined and made available to the pipeline in whatever way works best. And finally, the Smart Data Platform makes the location of data something that applications can reason about.
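
Since the Smart Data Platform is still early-stage research, there is no public API to show, but here is a purely hypothetical Python sketch of the core idea described here: referencing data by content hash inside a bundle, with locations and permissions kept as metadata that tools can reason about. Every name in it is an illustrative assumption.

```python
# Purely hypothetical sketch of content-addressed data referencing. The Smart Data
# Platform is early-stage research; this mirrors the ideas in the talk, not its API.
import hashlib
from dataclasses import dataclass, field

def content_id(path: str) -> str:
    """Identify data by a hash of its bytes rather than an error-prone file path."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

@dataclass
class Bundle:
    """A bundle of content references, with metadata held separately from the data itself."""
    name: str
    items: dict = field(default_factory=dict)        # content id -> set of known locations
    permissions: dict = field(default_factory=dict)  # e.g. {"comp": "read", "lighting": "write"}

    def add(self, path: str, location: str = "on-prem") -> str:
        cid = content_id(path)
        self.items.setdefault(cid, set()).add(location)
        return cid

    def locations(self, cid: str) -> set:
        """Let an application reason about where the data lives (cloud, facility, local disk)."""
        return self.items.get(cid, set())

# Example: gather training inputs, record that they exist on-prem, then add a cloud
# location after replication so a remote training job can resolve the same content IDs.
# bundle = Bundle(name="copycat_training_set")
# cid = bundle.add("/shots/sh010/keyframes/before_1001.exr")
# bundle.items[cid].add("s3://studio-cache/copycat_training_set")  # illustrative URI
```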

Making the location of data explicit like this allows tools to scale horizontally, meaning they can more easily replicate pieces of your data to many machines that can operate on it in parallel and then merge the results back. So the Smart Data Platform is something we've just started working on, and we hope to be able to share more details as the project evolves in the future. So that was a lot. Let's go back to the basics. We're working on machine learning in order to make your tools work harder for you.

We're working on real-time workflows so that you can work faster, and we're working on distributed data, so your tools can just crunch a lot more. Be sure to check out CopyCat, which will be released in Nuke 13.0. Let us know if you're interested in Genio, which is our game engine interop tool. It's in an early alpha stage. And we're looking forward to sharing the results of our SmartROTO project when that wraps up later this year.

And finally, you can find some great articles about data management in cloud on our Insights Hub on foundry.com. So check this stuff out and let us know what you think. And now we'll take some time to answer questions.

And once again, thank you very much for your time and really appreciate having you here. - Hello. How's everyone doing? So we've seen some fantastic questions coming in. I'm also happy to take any live questions. So a couple of the popular ones that I'm noticing here are around Hugo and questions around CopyCat for Mac. So, to answer: do we ever think that CopyCat will work on a Mac system? I really hope so.

We all use Macs for development here, and we love them. So I would really, really want to support them. I think that, as Matt and I have mentioned in the chat already, the challenge around that is that CopyCat is obviously built on PyTorch. PyTorch doesn't have great support for non-Nvidia hardware outside of Windows and Linux, but it's not impossible. And I think we're talking to the right people about making this happen.

So while we can't release it working on Mac at the moment we do really want it to work. I should say that it does absolutely work in CPU mode on the Mac. So you can still use your models, and the models will still be compatible across platforms, but you won't get the full benefit of the, obviously the hardware acceleration that comes with GPUs. - Great. How about the Omniverse question? It's a good one.

- Go for it. - Have a chat, yeah. So we've been in conversations with Nvidia for quite some time. With Omniverse, we're really interested in a lot of the workflows that it provides, and the Omniverse renderer is also quite interesting, but we don't have a plan of record for supporting the Omniverse plugins at this time. It's definitely something we're looking into. And we'd be really curious, when you interact with us through Foundry, if you're interested in Omniverse as well or you're planning on using it in production; that's info we're very curious about, and we'd like to know. - Yeah, very excited about Omniverse.

A question from Gregoire. Do we need new licenses for these features? So for CopyCat, it will be released as part of NukeX in Nuke 13.0. As for Genio and any of the other kind of alpha features that were mentioned, they can be tested right now, as long as you have an up-to-date maintenance subscription. And Chico's question, how slow is CopyCat on CPU and Mac? Not as slow as you think it is. Mac CPUs are pretty tasty. So, I mean, for example, as I said, I'm on a Mac, and while I haven't built my own models, I've often used the Inference node to generate sequences on my Mac.

So still usable. - There are lots and lots of great questions. - Lots of great questions. - Yeah, Dan and I have been very active in the questions part during the presentation, just because we just love the energy. It's been great. - Also really interesting to see on the poll where, (voice cutting out) most interested.

The bring your own machine learning model seems to have scored very highly. Very, very happy about that. So, yeah, kind of to sum up some of the answers in the questions section here: we're releasing Nuke 13.0 shortly,

and the Inference node is kind of the sister node to the CopyCat node. That's the node where you take your model, actually do your inference, and get your pixels for the sequence. In theory, it can take in an arbitrary PyTorch model, but what we're looking at at the moment is exploring the level of support that we can provide for it. So at the moment, it's roughly four channels in, four channels out, with the same resolution in and out.

But we do realize that a lot of people wanna bring their own models, and they wanna take models from GitHub, models from academia, models that they have from their own facility and run them natively, and so we are interested in figuring out what types of things you need to do. Allowing a change in resolution between input and output images is a big one, as is allowing temporal inputs that bring in multiple frames to inform the output. But if there are any other ones, then please do share. If there are any other kinds of requirements, please do let us know.

- Yeah. There is a question from Bathsim about does Nuke have live output now when using Genio? So that's a really great question. So Genio is not for the live output use case. It's for kind of the reverse of that.

Genio's really about bringing data from Unreal into Nuke and allowing you to finesse and finalize it in Nuke in a way that gives you all the capabilities of compositing and all of the tools afforded by Nuke. So it's not really about a real-time tool. It's about bringing real-time tools into composite workflows and being able to finalize them. And we've seen in our early alpha group, we've had people working on, for example, CG productions, wanting to bring assets in from Unreal and being able to bake down, for example, cards or backgrounds. Genio is an excellent tool for that 'cause it allows you to take the output from Unreal and really kick the quality to the next level, where maybe Unreal falls a bit short in terms of getting you to the final pixel quality.

So that's really the use case we're aiming for with Genio: asset prep and final pixel. - Excellent, and then on Chico's question on does every ML node work on the CPU? Yes, I believe so. So I think we're good.

The obvious caveat is that it's slightly slower. All right, then Michael Moorehouse, does Foundry have any relationship with UK universities for potential research PhD opportunities? So, what was I gonna say on that? I mean, we do actively engage with a lot of universities both in the UK and worldwide. I'm certainly open to collaboration.

There isn't any particular university that we would kind of say no to, or any kind of opportunity that we wouldn't investigate. So, yeah, I mean, always happy to talk. Feel free to contact me at dan.ring@foundry, and we can chat more. - So a question here about, I think, supporting Alembic exporting, and whether we have plans to explore this area.

So we haven't really looked at Genio too much in terms of Alembic exports. Genio does support a sort of, quote unquote, deep workflow, so pick up Genio and check it out.

But we are able to render out layers from Genio that you can bring into Nuke, but not really in terms of Alembic. However, I know there are probably some Epic folks in the chat, and they would probably love to say more, so check the chat. But there is an Epic Unreal Engine roadmap, and I know they have all sorts of amazing features on there. So you can check that out, and you can make suggestions through Epic for supporting things like Alembic.

We definitely see a lot of cool stuff on the horizon. We're in conversations with the folks at Epic as well about how we can use those features to make interop with just a wider set of production tools much more frictionless. - Yeah, the other thing to kinda tie in there is that even if Unreal doesn't support Alembic exports, it does support USD, and with some of the USD tools in Nuke you get a kind of workflow where you can have your geo reflected in both packages at once. And then what Genio is doing is providing the sort of image overlay on top of that geometry.

And then, brilliant, I think Jen has just said that Sean has just responded. Fantastic, thank you, Sean. (indistinct) Question from Nick, on the speed of viewing a frame: is it basically rendering a frame from the real-time engine? So as Matt said, yes, not real time. There's Genio (indistinct). There is a bit of a delay in prepping the scene and doing the setup needed to start pulling pixels, but then once the pixels are ready and the scene is built, the overhead on getting more frames is quite quick. So the first frame might be slightly slow, but if you're rendering a sequence then it gets faster.

But it's certainly not painfully slow, if that's what you're asking. Dan had a question, well, not this Dan, Dan Anstrom, around roundtrip interop with 3D packages, like Maya, Houdini, et cetera.

This is one of the reasons we're so intrigued by Omniverse, this idea of having a hub where you can connect many packages together. So, I think in terms of building one-to-one bridges that are kind of different for each individual application, Omniverse has a very different approach, which is quite interesting. So as I said, that's definitely something we're looking at around that problem space, but it's a great question. So the timeline for that is, kind of, it's around a wider problem that's also on our radar.

- I've also just seen a question here from Hendrick on is there a way to train models in Nuke 13.0 from the command line? So it's interesting. So in theory, it should be possible. We haven't done extensive testing on this yet, but where we have, it has worked. I think there were some issues with resuming training, but we're investigating that.

But ultimately we do plan, obviously, to support training on the command line. That's it, that's a big thing. And also what wasn't mentioned here is investigating how we scale horizontally.

So obviously, how do you put the full render farm to use for machine learning? And that's a much bigger problem for machine learning in general, and just how do you scale up across multiple machines and across multiple GPUs within machines? So it's a big area that we're investigating, but I think we, yeah, it's on our minds. - There's a question around whether the lack of support for Python 2.7 is due to PyTorch. They're unrelated. So Nuke 13.0 is built around Python 3. And it's kind of unrelated to the considerations around PyTorch. - Any other last minute questions? - I feel like I'm missing something really important in this treasure trove of interest.

I just wanna say thank you everyone for- - Yeah absolutely. Yeah, really insightful questions here. Thank you for your questions. This has been brilliant, really enjoyed it.

And thank you all for joining us. And there's a question here from Samuel: does Genio have a launch date? It doesn't have a launch date. It's still very much in the investigation phase. We're seeing where it works and where it makes sense. Obviously, the more we dig into it, the more you realize the power that this sort of bridge to a real-time game engine has, and that makes it very, very exciting.

So we wanna make sure that we're building the right tools in the right way for the right people. - I'm posting the link again for the survey, if you're interested in Genio, Katana, Nuke Interop, and a bunch of other features we have, we're collecting as much info as we can. So please, engage with us over the survey, and it would be really helpful.

And we'll be able to get back to you as well when we can show you something that we're working on. As Dan said, Genio, in particular, is very early stage. We're really excited by it, but it's one of these things where we're trying to figure out what its final form is going to be, and once we do, we'll be able to talk more about how it will exist as a product. - And then a question here from Sebastian: can you improve the first render with CopyCat, like add more frames to improve it? Yep, there is a retrain function. This is one of the powers of CopyCat: as you add more data to it, you can take an existing model and improve it in one way with some extra frames. You can also, then, if you didn't like that, improve it with some different frames and take it off in a different path.

So yeah, we're really excited to see how this sort of branching approach to extending CopyCat models and retraining CopyCat models works. - Alex Hughes has a question about OFX plugins for machine learning work, and whether the bring your own model system will be at the C++ level, or whether you'll be able to provide a serialized model. So we haven't fully defined this feature yet, but I think the area we're going towards is, if you have a PyTorch model, we'll have provisions for connecting your own hyperparameters into that. Dan, you can correct me if I'm (indistinct). - No, that's it. So, yeah, PyTorch, Git model. I think as we go along we're gonna be releasing examples of exactly how you do this sort of stuff (indistinct) for C++ level at the moment. Just because, again, it's still very early days and it's not clear yet exactly what level of support we need to deliver.

I think so far the feedback we've gotten is that just supporting PyTorch models natively will be enough, but Alex, if you wanna ping us with any kind of extra requirements for how you see a C++ level model working, I'm very happy to hear them. - Go for it Dan. - Yeah, cool.

A platform to exchange (indistinct) models currently? I suppose not yet, but what I'm really excited about is that I really hope we see things like this appearing on Nukepedia and forums and the like. I mean, Ben, who you might've heard on the Nuke 13.0 Foundry Live session, his team have been doing all this fabulous work on CopyCat. And I really hope that the frameworks that they've built become the foundations on which everybody else can do this and deliver their work.

I really hope that that ecosystem grows. Really, really excited about this. So yeah, I mean, I think if there's a platform to exchange models, I think Nukepedia's probably the easiest one out of the gate, and we'll see where we go from there.

- Great. (laughing) Sadness, sadness, we'll no longer be able to debug with printf. Yeah, I mean, I guess, someday we'll be able to have these machine learning models talk back to us, but not yet. We're getting the cane, Dan, we're being pulled off stage. This has been really great.

Thank you so much everyone for your attention, curiosity, and it's a pleasure for Dan and I to be interacting with all of you this way. So thanks so much. - Thank you all very much. Have great evenings, mornings, wherever you are. We'll talk to you all soon.
