Lightcraft Technology Real-Time Demo & Showcase | Production Summit Los Angeles 2023


[MUSIC PLAYING] Why don't you come up on stage? So ladies and gentlemen, this is Elliot Mack with Lightcraft Technology. And I'm just going to talk to Elliot for a minute while we're waiting. [APPLAUSE] To get them all going. So Elliot, tell us about Lightcraft.

All right. Lightcraft started quite a while ago, actually in the mid-2000s. My background was originally robotics.

I designed the original iRobot Roomba. And after I had finished shipping that, I got interested in visual effects and movie making and storytelling, and sat down and thought, hmm, I should learn this.

And I did about two shots and said, how does anyone ever make a movie? This is the hardest thing ever. It was shocking, in a way, to see the difficulty of it, having just come from building robots.

I'd been using software to build robots and never thought twice about it. I'm like, OK, I need to do this, put a gear here. And I could do that.

And it was shocking to me that what I thought was going to be the simplest thing, putting some live action and computer graphics together, was so difficult. And that, in many ways, is the origin of Lightcraft.

And I thought, wait a second. How hard could this be? So what you have put together-- and I think, Eric, are you ready? All right. So let's-- Eric Gessler from Global Objects. Eric is also one of the producers of this fabulous two-day event. So why don't you come up on stage and switch with me.

[APPLAUSE] And then these guys are going to talk about a new, ultra lightweight virtual production solution. And then they're going to share a new demo. And we'll have some time for Q&A, all right? So everybody, let's have a look at our actual start.

[APPLAUSE] So I met Elliot five years ago. Yeah, at NAB. Yes, we were at NAB.

And he was fully immersed in the virtual production world before the whole buzz of anything was happening. And he was solving really complicated problems with lots and lots of technology. Complicated camera tracking issues, how to integrate stuff into productions. And he was doing this years and years before the Mandalorian ever hit. So we met, and he's like, there's just a lot happening. And I'm not sure what direction we're going to go in.

And we sat down one day and picked up an iPhone. And I said to him, and he said to me, it has to go here, right? That was the start of it, and then he confirmed it with a bunch of other colleagues. It's a fascinating aspect, because the theme of Lightcraft and Jet Set, which is what we're going to be showing, is ultra lightweight virtual production. I had spent the previous decade on stages much like this one, building large, complicated virtual production systems for a variety of TV shows: Once Upon a Time, Pan Am, et cetera. They enabled a whole genre of television, of fantasy and science fiction production, because of the speed of production and tracking. But they were heavy.

And I spent a lot of my life getting road cases from point A to point B. And we were ready for a different approach. So we're going to show you some Lightcraft. So here we go.

Fantastic. So Lightcraft is an ultra lightweight virtual production tool. And it runs on an iPhone or an iPad.

Go ahead. Give them that. I'll give you a quick background. Again, we were originally founded in 2004. And the interesting thing is what I set out to do when I started the company.

The original plan was to build a lightweight indie virtual production system. And we missed big time. That's one of the production installations in Brazil. This is a setup over at Zoic in Culver City. This is a telenovela, I think, in Mexico City. So we built what was needed to solve the problem at that point.

We had a lot of hardware. And it was heavy. But it worked. We could do this. And some of the shows were shooting for thousands upon thousands of days.

And we learned a lot about the needs of a production pipeline. But it's just expensive. And the other key piece is we were doing the front end. We were doing the tracking, the real time compositing, et cetera. And we were leaving the back end, the automated post production, up to the visual effects vendors.

And some of them could do that. But it was too much for a smaller crew. And right around then, as Eric said, all these things happened. I had originally rented my first cubicle, when I started the company in Boston, from Bill Warner, who had founded Avid and was running a small incubator group. So he'd known me for a while. And around this period, I called him up while we were building these heavy systems.

And I said, OK, I need help. There's something magical here. I can't figure it out on my own. There's something going on. That led to the fateful April 2019 meeting, all these sorts of things, and the concept of doing this for everyone. And around then we noticed that the iPhone tracking started to work, and NVIDIA and Omniverse started punching a USD-shaped hole through the whole problem of transferring 3D from one point to another.

We realized, wait a second. We can actually-- we can automate. We can do the thing that we set out to do and build a really lightweight, fast production pipeline from end to end.

But there are different needs. On the previs and production end, you want to be as lightweight as possible, because things are changing, things are moving. We're not doing this shot; we're doing that shot. And you don't want to be schlepping equipment everywhere just to do a turnaround.

But post, you need a sledgehammer. And there's just no two ways about it. Those files are huge.

So one of the problems is that when you start mixing visual effects and production, production is incredibly fast-paced, and visual effects is very slow and meticulous. When you crash those two worlds together, you have to come up with systems that keep things fast-paced but also make sure you're being meticulous and getting all the right stuff. That is why there have arguably been some hiccups and bumps in this whole world of virtual production: you're trying to do something that normally takes a room full of super smart people who stay up all night eating Hot Pockets and drinking Red Bull to figure out.

And guys, you just want to get in and get the shot. Those are the worlds that you're merging. This is something that Elliot has done pretty well. That's exactly it. And so what we decided to do is split the problem.

And that's what we built with Jet Set, and that's one of the things we're going to be showing today. Again, the concept is ultra-light virtual production. It's an iOS-based system. We took all of the knowledge and learning we've had from a decade on these big shows and these big sets and put it into an app.

And it's doing all the neat things we used to do. We can render 3D USDC files from Blender and Unreal. It's easy to set your live action origins. We'll go through a demo where we show you how to match things. We're doing depth occlusion.

You walk around behind things, and it works. We handle the data automatically. When you record and hit cut, all this stuff is transferred over in an automated fashion. And we also have remote operation, because you need to be able to see what's going on in Video Village.

So let's go to our little remote operation. All right. So what we're going to do-- Pat, if we can start our little video feed.

There we go. All right. Oh, can you see this? Oh, OK. Are we-- I am mirroring. Let me exit out of PowerPoint.

Yeah, you might want to take that one. There we go. There we go.

All right. And let me start my video. All right. So how are we doing with that? Are we-- there we go. So what we're doing here and what you're seeing is everything is actually being rendered real time on the phone.

Pat's walking over there. He's tracking. All the tracking is being done inside the phone.

We have a fully animated USDC scene running inside the phone. And this is a preview. The whole concept of what we're doing here is that you leave the heavy scenes back in Blender, Unreal, USD, et cetera, wherever the heavy scenes need to be.

And you export a lightweight proxy that can run on the phone, and it all works. Animation works. Real motion works. The camera's being tracked. You get all kinds of different elements that you can start to play with.
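
As a rough illustration of that split, here is a minimal Blender-Python sketch of how one might export a decimated USD proxy of a heavy scene for on-device preview. The output path and decimation ratio are made up, and this is not Lightcraft's actual exporter.

```python
# Minimal sketch: export a lightweight USD proxy of the current Blender scene
# for on-device preview. The path and decimation ratio are placeholders.
import bpy

# Reduce mesh density so the proxy stays light enough for a phone.
for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        mod = obj.modifiers.new(name="ProxyDecimate", type='DECIMATE')
        mod.ratio = 0.25  # keep ~25% of the faces (arbitrary choice)

# Export to USD; the exporter writes the evaluated scene, so the decimate
# modifiers are applied in the proxy.
bpy.ops.wm.usd_export(
    filepath="/tmp/scene_proxy.usdc",   # hypothetical output path
    export_animation=True,              # keep the animation in the proxy
    export_materials=True,
)
```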

And so that's wonderful. But when we started, we really wanted to be able to integrate live action and CG. Eric, can we have you pilot? Yeah, sure. So what Pat's doing is setting a coordinate origin on the chair.

So as you see, it's detecting planes in the scene. This is the UI; you're seeing the UI. And he's just going to set an origin on the chair. I think that missed the chair origin. He's re-picking the scene.

And what we're using is the onboard sensors on the phone to detect the scene, lock the scene in, and do all these things that are traditionally done on a big virtual production stage. And we're going to set a scene locator to match the 3D to the live action. So now we've locked the scene locator of the moving scene into that. We're going to switch from our CG scene over to the AI matte so we can extract Eric from the background of the scene. And we have Eric flying. Hey there.
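
Conceptually, "setting a scene locator" boils down to a transform alignment: the tapped anchor gives you a world pose, the locator gives you the matching point in the CG scene's own coordinates, and the scene is re-rooted so the two coincide. Below is a small, purely illustrative numpy sketch of that math; the poses are made up and this is not Lightcraft's code.

```python
# Conceptual sketch of scene-locator alignment: place the CG scene so that
# its locator lands on the anchor the user tapped in the live-action scene.
import numpy as np

def pose(translation, yaw_deg):
    """4x4 rigid transform with a yaw rotation about +Y (up)."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    m = np.eye(4)
    m[:3, :3] = [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    m[:3, 3] = translation
    return m

# World pose of the anchor tapped on the chair (from the AR session). Made up.
anchor_world = pose([1.2, 0.45, -2.0], yaw_deg=30)

# Pose of the "pilot" scene locator inside the CG scene's own coordinates.
locator_in_scene = pose([0.0, 0.9, 0.0], yaw_deg=0)

# Transform that maps CG-scene coordinates into the world so the locator
# sits exactly on the anchor.
scene_to_world = anchor_world @ np.linalg.inv(locator_in_scene)

locator_point = np.array([0.0, 0.9, 0.0, 1.0])   # the locator's own position
print(scene_to_world @ locator_point)             # -> lands on the anchor
```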

Red 5, going in. Now, one of the fun things here is that you'll notice an utter absence of a lot of the things that we associate with production. No HD-SDI cables. No wires.

No nothing. We just brought a portable router, set it up, and fired it up. You can take this entire production workflow in a backpack. Laptop, phone, stabilizer, and a router if you're feeling fancy. Occlusion is fully working. We can walk all around.

Full six-degrees-of-freedom tracking. Everything's matched and locked. And it knows a person really well.

It doesn't know the chair, but it knows the person. And what you're seeing right now is that the matte edge is rough. That's because we're using the real-time AI matting on the phone. And we are actually rebuilding our keyer here as we speak so that we can have one-touch garbage matting and green screen picking. Not done yet. It'll be another week or so.

But we should be able to key at least as well in real time as we have traditionally on our other big systems that are used for finishing. Yes. All right. Well, so this is exciting. So I think we should record a take.

Let's go. Director, what do you want me to do? All right. Pat's going to hit record. You're going to start back on the ground. Pat, no worries. We're going to reload real quick.

We're chasing down an animation bug buried deep within RealityKit. And we've almost got it. There we go. All right. So we trigger the recording.

He's moving. Canopy's coming down. All right. You are lifting up. You are-- Thunderbirds are go.

You're going to get ready to go save the rebellion. And as we're moving through, we're tracking. Occlusion is working. Everything's flying. All right. You're going to go up, up, and away.

The X-wing's foils are going to open up. And we are flying. All right. Let's go. There we go.

All right. So that was fun. Well, with that, we're halfway there. Because-- let me go back to our slide. Now we've got to go to post-production.

So we just recorded all this, which is fun, and light, and fast, and interactive. Well, here comes post. What are we going to do there? So we actually have two systems, Jet Set and AutoShot. AutoShot does exactly what you think.

It's an automated post-production shot creation tool. Runs on the PC and the Mac. And it takes all the camera tracking data we just recorded on the phone. Pulls it over a local sync.
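
As a rough sketch of what that kind of automated take ingest amounts to, the snippet below just watches a synced folder and copies any new takes into a project tree. The folder layout and naming are hypothetical; this is not AutoShot's actual sync mechanism.

```python
# Illustrative take-ingest loop: watch a synced folder for new takes and copy
# them into the project tree. Folder layout and naming are hypothetical.
import shutil
import time
from pathlib import Path

SYNC_DIR = Path("/Volumes/jetset_sync/takes")     # hypothetical phone-sync folder
PROJECT_DIR = Path("~/project/takes").expanduser()

def ingest_new_takes():
    PROJECT_DIR.mkdir(parents=True, exist_ok=True)
    for take in sorted(SYNC_DIR.glob("day*/take*")):
        dest = PROJECT_DIR / take.parent.name / take.name
        if not dest.exists():
            shutil.copytree(take, dest)
            print(f"ingested {take.parent.name}/{take.name}")

if __name__ == "__main__":
    while True:              # poll every few seconds while shooting
        ingest_new_takes()
        time.sleep(5)
```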

And it lets us drop it into Blender or Unreal very, very quickly, and everything matches up. So let's-- there we go. All right. That's my slide sync.

And it's not quite one-click post, but we're a good way in that direction. And it lets us automate things. The goal of all this is to not wear out your team just making two shots. You want to make a show. And any time you're dealing with a show, you are looking down the barrel of dozens, if not hundreds or thousands of shots.

And automation is going to be what holds you together. The big production facilities are like, yeah, of course. But as soon as you get down to the small production team, it's very, very rare for them to have the internal automation workflows to do that. And that's really what we're building this for. We're building this system for a small team to be able to punch way above their weight.

If they have talented people, this thing is designed to do a vicious removal of the repetitive VFX drudge work, file handling, and all the things that take up people's time. So all right. You get to focus on the creative stuff, the fun stuff, instead of the heavy lifting.

All right. So this is AutoShot. This is the ugly button version.

The pretty UI is going to show up in another couple of weeks. But-- I have been begging for the time. So all right. So we have-- let's look in. And what it's going to do is detect the Jet Set client that we have.

And we're going to actually click Sync. We're going to pull in-- I'm going to choose the day that we're going to sync to. We're going to pull in files. As you can see, it's pulling a lot of files over.

So as long as Jet Set is running on an iOS device and you are on the same network as the laptop, it will automatically pull in your takes. So your takes are now coming up. We've just pulled in all of the takes that Pat just shot, over in a few seconds. And now we're going to go back to the Blender scene. This is our original Blender scene that we're going to match back into. Our system shows us that during the take, we used the scene locator "pilot", which was the 3D node that we placed onto the animated pilot's chair.

And we're going to use that in the same Blender scene. And we're going to tell it to Save and Run. What it's going to do is take that information and generate a shot. In the background, it's going to be extracting the frames and putting all the pieces together. So it loads your shot. It sets up your environment.

It sets up a composite. It sets up your 3D environment. It sets up your keyer. It puts together-- go ahead. All right. And so we have a shot coming in.
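
Under the hood, the generated shot file looks roughly like the result of a script along these lines: a camera keyframed from the recorded track, plus an image plane carrying the live-action plate. This is a simplified sketch with a hypothetical tracking-JSON format and paths, not AutoShot's actual generator.

```python
# Simplified sketch of the kind of Blender shot setup being described:
# a tracked camera plus an image plane for the live-action plate.
# The JSON format and file paths are hypothetical.
import json
import bpy
from mathutils import Matrix

with open("/takes/day01/take11/camera_track.json") as f:
    track = json.load(f)   # e.g. [{"frame": 1, "matrix": [[...], ...]}, ...]

cam_data = bpy.data.cameras.new("JetSetCam")
cam = bpy.data.objects.new("JetSetCam", cam_data)
bpy.context.scene.collection.objects.link(cam)
cam.rotation_mode = 'QUATERNION'

for sample in track:
    loc, rot, _scale = Matrix(sample["matrix"]).decompose()
    cam.location = loc
    cam.rotation_quaternion = rot
    cam.keyframe_insert(data_path="location", frame=sample["frame"])
    cam.keyframe_insert(data_path="rotation_quaternion", frame=sample["frame"])

# Image plane parented to the camera; the real tool also drives its depth and
# scale from the recorded LiDAR distance, which is omitted here.
bpy.ops.mesh.primitive_plane_add(size=1.0)
plate = bpy.context.active_object
plate.name = "ShotPlate"
plate.parent = cam
plate.location = (0.0, 0.0, -5.0)   # placeholder depth in front of the camera
```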

And I'm going to turn off-- I'll explain it all-- we have a viewport compositor going; I'll look at that later. All right. I actually got the wrong shot. Hang on. That doesn't look right-- that's the test shot.

I'm going to pick the last one here. All right. Oh, I've got it, the later shot. I think we were on take 11. All right.

Take 11. There we go. So I'm going to rerun that.

So again, we're going to extract all the files. What we do is take the original Blender file and create a new Blender file that is linked to it, but also includes the information from this particular shot take: an animated camera, an image plane with the shot on it, and a material keyer so we can do green screen work, as well as one of the newer pieces in Blender, the viewport compositor, which is a really remarkable piece of technology they're adding. And we'll show Unreal a little bit later.

But the pipelines between Blender and Unreal are actually very, very similar. All right. So we just ran a new shot. There we go. And it's going to load up our new Blender file.

Oh, there's Eric. That looks like me. Yeah, that's where it's going. [LAUGHTER] All right.

So I'm going to disable our viewport compositor for the time being; we don't need that immediately. But what we have here-- I'll zoom in a little bit. And I'm just going to hit Play. And we have our shot.

So there's Eric. He's flying along. He's getting ready. The pod's taking off. He's going up in the air. Now you're going to notice something: hey, our green screen key could be a little bit better.

So let's fix that. That's such a mighty-- you're setting this. There we go. I'm going to pull this up and give ourselves a material view, because in each version we actually include a material keyer. And-- there's an entire compositing pipeline that automatically got generated with the shot.

And I'll talk you through it. So it has a green screen keying pipeline. He's going to go in there and do a better job of sampling the color, not just the auto one that we have, and fix it. So now we have a Blender file, fully immersed in 3D, with the shot that we just took.

That's running inside of Blender. It's out of school. [APPLAUSE] So this is really what I had wanted from the early days of Lightcraft: to minimize the time between when you have an idea and when you can realize it for real, manifested in a real production environment. And what we've spent the last four years on, since that fateful NAB day, is understanding the first principles of what happens between an idea and a shot, and just going back over it over and over.

So this is fun, and the composite is nice. But frequently you will need more sophisticated composites than you can get from a material keyer. For example, in a material keyer you can't do a blur.

Those show up in compositing. So we also have, on every shot, we built a-- I'm going to move over here. I'm going to pull up the compositing tree.

We built an automated compositing tree that includes all the pieces. Sorry. It's hard to talk and type.

There we go. So this is a fully automatically generated compositing network that does a green screen composite and matches it into the background. And I'm going to turn off the original image plane so we don't have confusion. So if you've ever been a compositor working on a show-- this is your first couple of days.

You set up your pipeline. You bring in your shot. You put in your basic compositing tree. That's your first day or two with the shot. And this just spits it right out up front.
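
For readers who want to poke at the same idea themselves, the node graph being described is roughly what the short Blender-Python sketch below builds: plate in, keyer, alpha-over onto the CG render. It is a bare-bones stand-in for the generated network, with placeholder paths and key color, not the actual AutoShot graph.

```python
# Sketch of an auto-generated green-screen comp in Blender's compositor.
# Node choices mirror a basic key-and-composite; file paths are placeholders.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

plate = tree.nodes.new("CompositorNodeImage")        # live-action plate
plate.image = bpy.data.images.load("/takes/day01/take11/plate_0001.png")

cg = tree.nodes.new("CompositorNodeRLayers")         # rendered CG background
key = tree.nodes.new("CompositorNodeKeying")         # green-screen keyer
key.inputs["Key Color"].default_value = (0.1, 0.8, 0.2, 1.0)  # sampled green

over = tree.nodes.new("CompositorNodeAlphaOver")     # keyed FG over CG
out = tree.nodes.new("CompositorNodeComposite")

tree.links.new(plate.outputs["Image"], key.inputs["Image"])
tree.links.new(cg.outputs["Image"], over.inputs[1])  # bottom layer: CG render
tree.links.new(key.outputs["Image"], over.inputs[2]) # top layer: keyed plate
tree.links.new(over.outputs["Image"], out.inputs["Image"])
```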

And so once again, we have a set-up shot. Now, you'll note that he's not being included. There's a Z-depth compositing key that Blender's viewport compositor doesn't quite support yet.

So as soon as it's there, that will work. But the key aspect is that when you're doing finished compositing, a depth matte, even from the iPhone, isn't quite good enough to get a fine edge. So we automatically enable Cryptomatte, both in our generation of the files and in the keying process, so that you can click and drag and get your Cryptomattes. The goal here, again, is that when you're dealing with a 2D/3D shot, you would really like to be able to interact with both the 2D side and the 3D side in the same environment, immediately. And that's really the concept behind that.
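
As a concrete illustration of what "automatically enable Cryptomatte" means on the Blender side, a minimal sketch might look like the following; it is illustrative only, not Lightcraft's exact setup.

```python
# Sketch of turning on Cryptomatte so objects can be isolated cleanly in post.
import bpy

view_layer = bpy.context.view_layer
view_layer.use_pass_cryptomatte_object = True   # per-object ID mattes
view_layer.use_pass_cryptomatte_asset = True    # per-asset ID mattes

scene = bpy.context.scene
scene.use_nodes = True
crypto = scene.node_tree.nodes.new("CompositorNodeCryptomatteV2")
crypto.source = 'RENDER'   # read the Cryptomatte passes from the render layers
# Picking objects in the compositor UI then fills in crypto.matte_id.
```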

Now, the other thing is that the same process you just saw run through Blender also works right now in Unreal Engine.

So there is an Unreal version of this that will do the same process. If you're doing previs or you need to work inside a real-time engine, there's an Unreal Engine version. And there's other support coming in the near future as well. All right. I'm going to switch from the PowerPoint side for a second.

The other really interesting thing about the pipeline is-- well, we'll talk about this. So I know what a lot of you guys are thinking: when is that going to come to a cinema camera? When are we going to be able to get that same sort of thing on an ARRI or a RED? So we are working on that as well. And with the latest updates that are happening with the Apple vision hardware, there are a lot more possibilities for filmmakers doing work inside of a virtual environment. So we've already been testing what we're calling the second camera. If you look over here, what second camera means is that the iPhone is now sitting on the rails of a camera, and we've figured out how to calculate an offset between the iPhone and the lens of the camera.

And so what this does then is it puts a virtual production computer on top of your camera. And the same process that you just saw happens exactly the same. The difference is that we run a calibration that looks through both lenses.

And it then knows what their offsets are and what the different characteristics of the lenses are. So that will be working in very short order. And now we'll do an Unreal speed run the same way we did a Blender speed run. I won't use the same clip; I'll use a previous clip we did.

The Unreal workflow is very similar. We use AutoShot to automatically take in the takes from Jet Set and process them into a shot. And it generates a command string that we're just going to copy and paste into Unreal.

And that's going to automatically create a level sequence in Unreal with a tracked animated camera, an animated image plane, and a material keyer, all in an animated level sequence. So all right, let's try that out. Here we go. [LAUGHS] All right, so I'm going to switch AutoShot over to the Unreal version and make sure I'm on the right take. So we're going to-- let's change our directory over to our global folder. All right, and change that to North by Northwest.

Great. So this is a shot that we did when we were doing a remake, a re-imagining of North by Northwest. All right, so I'm going to set our-- just a little bit of background. We automatically break down our shots into shooting days.

And then AutoShot lets us pick which take from the shooting day we're going to be working with. So we'll work with scene 101, take 17, and hit Save and Run. OK, so it gave me a command string, and I'm going to go over here and copy and paste that into Unreal.

There we go. And it's going to go through. Let's go find our scene locator. Then click our camera in. There we go.

Oh, I forgot to do something. [INAUDIBLE] Yeah, you're going to put the right scene locator in. All right, back here. And for this scene, we're using a scene locator called "floor run". So I'm going to type that in.

Yeah, the floor run. The other one was "pilot"; that was me. All right, let's try that again. Save and Run. Create a command string.

Paste it. [INAUDIBLE] Put in your camera. Scene-- there we go. And let's go. And so he is-- there we go. Back up a little bit.

So we've created this inside the Unreal scene. Navigating Unreal one-handed here. There we go. OK, there we go.

So what we have done is-- oh, good. We've created a level sequence, an animated level sequence, in here. And that contains the tracked camera and the animated image plane.
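
For reference, creating that kind of level sequence by hand through Unreal's Python scripting looks roughly like the sketch below. The asset names and paths are invented, and the real command string does considerably more (keyframed transforms, the image plane, the keyer).

```python
# Rough Unreal-Python sketch of the level-sequence setup being described:
# create a sequence asset, spawn a cine camera, and bind it. Asset names and
# paths are hypothetical; transform keys and the image plane are omitted.
import unreal

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
sequence = asset_tools.create_asset(
    asset_name="SQ_s101_t17",        # hypothetical shot/take name
    package_path="/Game/Shots",      # hypothetical content folder
    asset_class=unreal.LevelSequence,
    factory=unreal.LevelSequenceFactoryNew(),
)

# Spawn a cine camera in the level and possess it in the sequence.
camera = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.CineCameraActor, unreal.Vector(0.0, 0.0, 0.0))
camera_binding = sequence.add_possessable(camera)

# A transform track is where the recorded camera animation would be keyed.
transform_track = camera_binding.add_track(unreal.MovieScene3DTransformTrack)
transform_section = transform_track.add_section()
transform_section.set_range(0, 800)   # frame range of the take (placeholder)
```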

And so as I move through Sequencer here, you'll see the camera is moving, and he's running along. Now, an important aspect is that the distance of that image plane is correctly computed. On the iPhone, we make heavy use of all the sensors, so we're using the LiDAR data heavily, and we calculate the correct depth of that image plane, roughly. So as the person is walking through the scene, they are walking through as a 3D object, a 2.5D object, in real time. You can see him running back and forth.
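
The plate-sizing math behind that is simple enough to show inline: given the subject distance (from the LiDAR) and the camera's field of view, you can size a plane so it exactly fills the frame at that depth. The numbers below are made up.

```python
# Small illustration of image-plane placement: size a plate so it fills the
# frame at the measured subject distance. Values are placeholders.
import math

def plate_size(depth_m, hfov_deg, aspect):
    width = 2.0 * depth_m * math.tan(math.radians(hfov_deg) / 2.0)
    return width, width / aspect

w, h = plate_size(depth_m=3.2, hfov_deg=63.0, aspect=16 / 9)
print(f"plate at 3.2 m: {w:.2f} m x {h:.2f} m")
```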

So that's great, but you'll notice that we have an issue, which is that we still have a green screen.

So let's go over and fix that, because we automatically create a material keyer. So we'll create a material instance. I'll drag that into our plane, our shot folder. All right, and so then we're going to click on that.

And we're going to enable a couple of our parameters. Just drag our camera original over there. There we go. And set a key. Great.

OK, love it. Save that. And then we need to apply this newly created material keyer over to our instance, which is over here. So I'm going to search for our material.

There we go. And I'm going to apply our keyer over here. All right. So now we have a fellow running through a lobby. And I'm going to switch to our camera cuts so you can see that.

Camera, all right. So as you can see, now he is running through the lobby, tracked. Now you may observe that occasionally his feet are piercing the floor. That is because we are accurately representing the centroid of the body, but depth-wise his feet could be poking forward or backward. So if you're on a matte surface, what we would do is come down and just adjust a couple of these keyframes on our image plane.

And I'll move this out to the extents. Sorry, it's a long shot. 800 frames. There we go. All right.

So I'll just remove our transform keyframes. And then we're going to go back to our image plane and adjust him a little bit forward. And there he is. So we can set him exactly where we want, and then lock our camera.

All right. And now-- whoops. Lock our camera cuts.

All right. And now he is merrily walking through our scene. So that works really well for a matte surface where there are no reflections. But wait, you say.

We want to capture something with reflections. For that, we would approach the problem in two passes: render out the scene with just the reflections in there, and then composite over it in a more traditional process. So let me go find it on the desktop. There it is.

So we did. And I'll just play it back here. So what you see are procedurally generated reflections in the shiny floor and a more traditional composite running on top of them.
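
The two-pass idea reduces to a standard over operation: the reflection-pass render sits underneath, and the keyed foreground (with its alpha) is laid on top. A tiny illustrative sketch, with placeholder file names and a straight-alpha assumption, is below.

```python
# Tiny illustration of the two-pass composite: keyed foreground (RGBA) over
# the reflection-pass render. File names are placeholders.
import numpy as np
import imageio.v3 as iio

bg = iio.imread("refl_pass_0001.png").astype(np.float32) / 255.0   # pass 1
fg = iio.imread("keyed_fg_0001.png").astype(np.float32) / 255.0    # pass 2, RGBA

alpha = fg[..., 3:4]
comp = fg[..., :3] * alpha + bg[..., :3] * (1.0 - alpha)   # straight-alpha over

iio.imwrite("comp_0001.png", (comp * 255).astype(np.uint8))
```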

And all this is done without handwork. You don't have to sit there and do that. Now, the alert among you will notice that they're not perfect. So for example, his rear leg is not perfectly reflected in there.

Let's see if I can hit the Pause button here. That's not the Pause button. There, that's the Pause button.

But for automated processing, it works pretty well. And you can always hand-tweak on a per-shot basis pretty easily. All right.

So let me switch back as we continue our speed run. There we go. So basically, in both Blender and Unreal, we have a high-speed process to do this. And as Eric alluded to, up and coming is Jet Set Cine. I'm going to grab the device off here. There we go.

Because people shooting larger-scale movies aren't going to shoot them on an iPhone camera. So that's what we're doing: we're getting off the iPhone camera. We've been working toward this, and we have it working. It's still kind of ugly, so it's not a product yet.

But what we are doing is using your own cine cameras and lenses. We use a little piece of hardware, an Accsoon SeeMo converter, to take real-time video from the cine camera and put it into the iPhone. We have a separate companion app, which is Jet Set Calibrator.

There you go. And we don't need calibration boards. We detect natural features in the scene.

So if you're on a set with a lot of stuff, you just sweep the camera back and forth and we can lock it in. No calibration board needed, no targets on the set. And we use the same AutoShot workflow you just saw, with automatic clip detection and matching.
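
Conceptually, that sweep is doing relative-pose estimation from natural features seen by both cameras. The OpenCV sketch below shows the general idea on a single synchronized frame pair; it is a textbook-style illustration, not Jet Set Calibrator's actual algorithm, and the file names, intrinsics, and shared-intrinsics simplification are all assumptions.

```python
# Conceptual sketch: estimate the relative pose between the iPhone camera and
# the cine camera from natural features in a synchronized frame pair.
import cv2
import numpy as np

iphone = cv2.imread("iphone_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder
cine = cv2.imread("cine_frame.png", cv2.IMREAD_GRAYSCALE)       # placeholder

# Detect and match natural features (no calibration board needed).
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(iphone, None)
kp2, des2 = orb.detectAndCompute(cine, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:500]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# For simplicity, assume both frames share the same placeholder intrinsics.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("rotation offset:\n", R)
print("translation direction (scale is ambiguous):\n", t.ravel())
```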

The other thing to note is the little device on the back of the iPhone. This is a cooler. When you're using the iPhone for many, many hours, especially at its highest compute load, it gets a little hot. So we've found some off-the-shelf coolers that work really well with it. This will keep the entire device cool to the touch, especially when you're running higher frame rates, like 120 frames a second. Sometimes you're going to need that to get super slow motion shots or other things. So that's what the cooler is for.

But it gives me great pleasure, after having hauled so many road cases in my life, to have a real production system like this. We've done tests to match it to 24 frames a second, 25 frames a second, et cetera. It works. All right. And so in conclusion, one more thing.

So what you're seeing now is already on the App Store. Jet Set Pro, the iPhone version of it: you can go to that QR code, download it, and be using exactly what you just saw today. Yeah, there's a free version of the app that comes with a couple of sets. The tracking works. AutoShot works. AutoShot is a free download on the Lightcraft website.

Our site is at lightcraft.pro. And basically, we want this to kick off a lightweight revolution in virtual production. Here you go. Thank you, Elliot. So you guys, go to that code in the App Store, download it, and explore. It's so easy to use.

It's so fun. And it's available today. And it works out of the box.

There you go. Thanks a lot. Thanks, everybody.
