You can pretty much cover all of Asia from China with 3,000 kilometers, and that includes basically all of India, Southeast Asia, and a good part of the western Pacific. And you can probably fly far enough north over Russia that, if you get a tanker up there to refuel in Russian airspace, you can fly and hit Alaska if you want to. So if this thing goes into service in the 2031-2032 range, it pretty much gives the PLA control of most of Asia. Welcome to Manifold. Here again with T.P. Huang, a recurring guest on our show. This is an extra that we're recording as a lead-in to an episode we had already recorded about the new sixth-generation fighter jets that China revealed. We had planned to release an episode on those sixth-generation fighter jets, but because of the recent events with the AI company DeepSeek in China releasing some really interesting models, TP and I thought we would discuss that for a little while, maybe 30 minutes or so, before we get into the MilTech stuff.
So the first part of this episode will be 30 minutes on DeepSeek and AI-related issues, and the last part of the episode, which as I recall is relatively long, maybe 90 minutes, is all about MilTech competition between the U.S. and China and the balance of power in the Western Pacific. So, TP, are you with me? Yes, I am glad to be back. Great. It's great to have you. I'm going to do a little bit of intro on DeepSeek and R1; just feel free to add anything you want, and after I finish the intro, we can just have a conversation about it.
So I'm going to pitch this at listeners who aren't obsessively following the Gen AI space, but we'll delve into some details that I think will be of interest to people who are actual experts in Gen AI. What happened recently is that a new model called R1 was released by the company DeepSeek. DeepSeek is a very interesting Chinese company. It had its origins in a quant trading fund but has since pivoted to focus more on building foundation models, and they've been at the forefront of Gen AI research for a while now. They've produced some world-class models.
Those world-class models were trained using very little compute, just a fraction, maybe one thirtieth, of the compute required by labs like OpenAI or DeepMind or Anthropic to train their models. They perform roughly as well as the best current models; people would often cite o1 from OpenAI as maybe the best model currently available at scale. And also the inference cost, the cost to run the DeepSeek models, is only about one thirtieth of the cost of running the current best OpenAI models.
So it's a very jarring and surprising development that a small Chinese company, I believe DeepSeek has about 200 people on its team, was able to ship a model which, in some sense, is way better: if you count a 30x cost advantage as a big advantage, while having parity in quality, it is way better than what all the big labs in the U.S. can currently offer. This has caused a huge shift in people's mindset about the AI race between China and the U.S. There are many different topics we could discuss here. We could talk a little bit about the internals of how the DeepSeek models are so efficient. We could talk about a breakthrough they made in the use of reinforcement learning for reasoning models. And we could also talk about the impact on AI infrastructure planning, for example the Stargate project, which wants to spend $500 billion on data centers, power supply, and compute. So I think those are the three main things we can discuss. And then the other thing we could discuss: both TP and I have experience in really using the DeepSeek models in practical settings, and we can talk about whether the quality of the models lives up to the benchmarks. On the benchmarks, it's a top-of-class kind of model.
But of course, the real-world effectiveness of the model is something you can only learn by using it in a bunch of contexts. So how does that sound as an outline, TP? Yeah, sounds great. I just want to make one other point out there: DeepSeek itself was actually not well known, even inside China, until the past few months. I think I first heard of it maybe six months ago, maybe a little longer, like the middle of last year. And I'm someone who actually follows basically all the AI companies in China. So I always suspected ByteDance would be the one that does the best. The fact that we have this no-name company just jump out of nowhere: this wasn't planned, nobody could've guessed this, basically. So what they did is quite extraordinary.
Right, other big Chinese AI firms, yeah, might have had a lot more resources. I might've been on the DeepSeek trend maybe before you; I read their papers, actually, and so I was always very impressed with the way they write their papers.
They're very open. They release their models open weights. And so I was impressed with them from the get-go. We should mention, though, you mentioned ByteDance. Everybody knows ByteDance because they're the company that owns TikTok, for example.
ByteDance also released, just in the last day or two if I'm not mistaken, Doubao 1.5. That's also a hyper-efficient, optimized model that is comparable to o1, I think, on the benchmarks. So ByteDance is not out of it by any means. Yeah, so I would guess it's quite possible that big corporations and a lot of people in China will use Doubao, just because they're the company that does TikTok, or Douyin, whereas DeepSeek got a lot of press this week and generally since December. So it's very possible that in the end, Doubao will be the one to create the best model. But there will be so little difference for practical uses that the one that's far more open, that produces all these great papers people love to read, that has this great small-team, David-versus-Goliath story, will capture the heart of the AI community, basically, right? Yeah. Now, is Doubao open source? I believe they open-sourced only a couple of pieces of it. The main multimodal one, I don't think that one's open source.
I see, because the thing that's happening in the U.S. is there are a lot of companies that run servers, and they can basically put up any open-source model and charge people to use it. That's already the case: we can get access to hosted versions of DeepSeek R1 offered by U.S. companies at very low prices. So that is a difference in the way Doubao maybe will get used in the West as opposed to in China. Yeah. The problem is, whenever an American company or anyone in the Western world wants to use a Chinese model, they're always concerned about sending their data to China, right? So open-sourcing kind of gets rid of that concern for most people. Exactly. I would think. Yeah. So if I go to, like, Cluster.AI and use the DeepSeek model, the data is just flowing through their cloud instances.
And none of the data goes back to the company DeepSeek. Well, if you use one of those distilled models, you can even run them on your own servers at home. So I've seen quite a few people running the seven-billion-parameter or 1.5-billion-parameter distilled models.
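For listeners who want to try this themselves, here is a minimal sketch, assuming you have Ollama installed and have pulled one of the R1 distills; the model tag, port, and placeholder key below follow Ollama's conventions, so adjust them for whatever local runner you actually use. Ollama exposes an OpenAI-compatible endpoint, so the standard OpenAI Python client can talk to a model running entirely on your own machine:

```python
# Hypothetical local setup: after `ollama pull deepseek-r1:7b`, Ollama serves an
# OpenAI-compatible API on localhost:11434, so no prompt data leaves your machine.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint, not a cloud API
    api_key="ollama",                      # placeholder; Ollama ignores the key
)

response = client.chat.completions.create(
    model="deepseek-r1:7b",  # the 7B distilled model discussed above
    messages=[{"role": "user", "content": "Explain chain-of-thought in one paragraph."}],
)
print(response.choices[0].message.content)
```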
And that's kind of interesting. I might have to set that up myself, actually. Yeah. Well, okay.
Now you're getting into some stuff here. So for the, sorry, listeners who are not AI experts, the word "distill" probably doesn't really mean anything. Let me just explain quickly why that's an interesting aspect of this. Along with releasing the core R1 model, DeepSeek also released a bunch of distilled models. What that means is they take a smaller model, which is open source. Maybe it came from a Chinese company, like the Qwen models, or maybe it came from Meta, like in the case of Llama. They took a small model and then used a technique called distillation, where they use the output of the big model to rapidly improve the capabilities of the small model. And it's kind of an open question: we have benchmarks for these small distilled models, but we don't know how they really function in the real world.
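To make "distillation" concrete, here is a minimal sketch of the idea; the function names are hypothetical stand-ins, and a real pipeline like the one DeepSeek describes generates hundreds of thousands of teacher samples and runs a full supervised fine-tuning job on the student:

```python
# Minimal sketch of distillation: a big "teacher" model generates completions,
# and a small "student" model is fine-tuned to imitate them. The callables here
# are placeholders for a real LLM API and a real fine-tuning framework.
def distill(teacher_generate, student_finetune, prompts):
    # 1) Teacher produces high-quality outputs (e.g., full reasoning traces).
    dataset = [(p, teacher_generate(p)) for p in prompts]
    # 2) Student is trained with ordinary supervised fine-tuning on that data,
    #    picking up much of the teacher's capability at a fraction of the size.
    return student_finetune(dataset)

# Toy usage with stand-in callables:
prompts = ["Solve: 12 * 13 = ?"]
teacher = lambda p: "<think>12*13 = 12*10 + 12*3 = 156</think> 156"
print(distill(teacher, lambda ds: f"student tuned on {len(ds)} examples", prompts))
```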
If, in a practical setting, say a narrow enterprise application, these small distilled models are sufficiently good, then you have an even further reduction in inference costs. So instead of 30x, you might have a 100x reduction or more, if the small distilled model is actually good enough for what you want to do. And all of this has an impact on infrastructure planning. So take the Stargate project, where OpenAI and some other companies, SoftBank among them, want to spend $500 billion on building out huge data centers, the power supplies for those data centers, and a lot of NVIDIA chips to stuff them with. It's rather amazing to me that the DeepSeek release coincided with them pitching their huge ambitions, and it's kind of funny, because if inference ends up costing 30x or 100x less than what you thought, surely that would affect your planning for a $500 billion decadal project, right? And so it sort of reveals that most of this AI infrastructure stuff, I call it the AI infrastructure grift, isn't actually based on any solid projections. Because if it were based on solid projections, the new information from the DeepSeek release would modify their calculations. And yet there's no sign of that. Yeah.
Like, if you have a more powerful GPU at home and can run the 32-billion-parameter distilled model, then why do you need to keep pinging OpenAI to do your prompting, right? So I think that's an interesting part of this. Yeah. So because of the very clever optimizations DeepSeek made in its training process and in the model itself, both the training and the inference are way, way cheaper than what people thought was possible. And that new information, I think, has not been incorporated into people's thinking about infrastructure requirements. So that's one interesting aspect of this news.
People who are really into AI and follow it at the level of the foundation models will know that one of the big advances recently was so-called reasoning models: models that, in a way, talk to themselves. You ask one to solve some problem, and it talks to itself and breaks the problem into steps. Ultimately, this reasoning capability lets the models solve problems that previously, in a single-shot response to a prompt, they couldn't solve, like a complicated math problem or physics problem or programming problem. By reasoning with itself, the model can converge on a very complex solution to a complex problem. That was the main chunk of recent progress at the big labs, especially OpenAI releasing o1, which is a reasoning model. What DeepSeek showed is that, in order to train models to have this reasoning, you don't have to take lots and lots of annotated chains of thought from humans. The big labs were paying human grad students in physics or chemistry to solve problems and record their thinking process, and that was used as training data for the models. DeepSeek came up with a very automated way of doing this using reinforcement learning. And that's conceptually perhaps the most interesting aspect of the new paper and the new work: they came up with a totally different way to create these reasoning capabilities in models. For people who have a more academic interest in AI, I really suggest looking into it, because it is a super interesting new aspect of reasoning models.
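At a high level, the published R1 recipe samples a group of answers per prompt, scores each with a simple automatic verifier (no human-annotated chains of thought, no learned reward model), and reinforces answers that score above their group's average. Here is a minimal sketch of that scoring step; the sampled answers are hypothetical toy data, and a real run would sample from the LLM itself and feed these advantages into a policy-gradient update:

```python
# Minimal sketch of the rule-based, group-relative RL reward idea from the R1
# paper: score each sampled answer with an automatic verifier, then normalize
# scores within the group so correct answers get positive advantage.
import statistics

def rule_based_reward(answer: str, reference: str) -> float:
    """Automatic verifier: 1.0 if the final answer matches the reference, else 0.0."""
    return 1.0 if answer.strip() == reference.strip() else 0.0

def group_advantages(rewards: list[float]) -> list[float]:
    """Normalize within one prompt's sample group (mean 0, unit variance)."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against all-equal rewards
    return [(r - mu) / sigma for r in rewards]

samples = ["156", "155", "156", "one hundred"]  # four sampled answers to "12 * 13 = ?"
rewards = [rule_based_reward(s, "156") for s in samples]
print(group_advantages(rewards))  # [1.0, -1.0, 1.0, -1.0]: correct answers reinforced
```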
I do wonder if OpenAI itself, behind the scenes, is also doing something similar, but because they haven't really talked about how they got the results they did for o1, we just don't know what they're actually doing. Well, I do have insider information from companies that supply data. It's actually not OpenAI or Meta or Google itself that generates this annotated, curated chain-of-thought data for reasoning training; they buy it from other companies. Scale.ai is a well-known one, it's a unicorn, and they're a data provider to these big labs. And I know other companies that are also data providers to these big labs; these guys are making billions of dollars a year in revenue, not just providing low-level cleaned data but this chain-of-thought stuff. So I know for sure these big labs are spending a lot of money to get sophisticated chain-of-thought training data, which is possibly not necessary if you use the DeepSeek RL method. So that's another earth-shattering piece of this new paper. Okay, that might explain why the Scale.ai guy sounded kind of bitter on CNBC the other day. I think he should be bitter, because a lot of what he's built, an infrastructure to source human labor to produce this kind of data, may not be necessary as better methods come online.
So, let's see, we've talked about the technical innovations related to the model. We've talked about the infrastructure implications of the model. We could talk a little bit about the AI race between the U.S. and China. I think prior to the DeepSeek papers coming out, the position of most people in America, even the so-called experts, would have been that China is somewhat behind, maybe a year or two behind the West, or the U.S. in particular, in Gen AI, in particular large language model Gen AI. And now I think a lot of people are realizing that actually there's kind of no gap, right? Because if you just look at benchmark performance, there are models now in China which are just as good as the best U.S. models. Going forward, then, the issue is: the anti-China hawks are kind of hanging their hat on, well, eventually our sanctions will work, and the Chinese labs won't have the cutting-edge NVIDIA chips to train on, and so maybe we'll recover some kind of lead and hold it.
What do you think about that? Yeah. So recently I took a look at some of the physical hardware buildup going on in China. One, they have access to more chips than people think. I mean, smuggling H100s into China is actually a thing; that's been known for a while now. But aside from that, the domestic chip makers in China, especially Huawei, are making, I wouldn't say just pretty good, I would say pretty rapid progress on development and on scaling up production. I did some back-of-the-envelope computation recently, just based on what ByteDance said they were going to buy from Huawei a couple of days ago, and I could easily see Huawei selling like 1.5 million GPUs this year to the Chinese firms. And I do see this number going up over time, I'd say, as they get more seven-nanometer or maybe five-nanometer capacity in there. One of the interesting things that happened recently is that one of the Chinese chip makers, CXMT, which produces DRAM, finally moved on to the DDR5 generation. And based on what I've seen, the level of memory chip they produce is sufficient for making HBM3, the type of memory now prominently used in the latest AI chips. So aside from just the AI algorithms, the models themselves, they're also building a hardware ecosystem underneath it, which I think is quite a sustainable process.
So just looking at things, this points to a smoother path for them going forward, right? So let me ask, just for the more expert audience: the latest Huawei chips, the Ascend 910B and 910C, if you had to map them onto a more familiar NVIDIA product, how would you describe them? Equivalent to an H800? Equivalent to an H100? How would you describe the rough capability of those chips? Yeah. So the previous generation, the Ascend 910B, which I think they might've stopped producing now, is more akin to the A800, the slower-interconnect version of the A100. And the 910C is more like the H800, because it has the same issue of lower interconnect speed, and it uses HBM2E instead of HBM3. So that's kind of where they're at, hardware-wise. So if, in the next year or two, there are millions of 910Cs available to the leading Chinese AI companies, and DeepSeek is now one of them, I'm guessing there isn't any meaningful disadvantage. I mean, there may still be a disadvantage, but it's definitely not going to be decisive between what the big U.S. labs can do and what the Chinese companies can do. Does that sound fair? Yeah. I always thought the issue with DeepSeek and some of the other smaller players in China was not necessarily the chips themselves, but just that they didn't have enough funding, because aside from access to chips, you also have to have money to actually rent or buy them.
And given the amount of chips we're seeing come into the marketplace now, there's still going to be a GPU deficit between the U.S. AI hyperscalers and the Chinese ones, but it's probably not going to be a big deal going forward. And there was an assumption when they made the new AI chip export rule, where they placed the world into three categories, tier one, tier two, tier three. I think part of the rationale was they thought, okay, we just can't export any AI chips to China anymore, right? And I think that's probably too late. My guess is, if they had enacted this from like 2021, that would have been an issue. But because the Chinese AI firms had maybe a three- or four-year window to absorb the sanctions, they were able to adjust. And as you can see with DeepSeek, you don't need as many chips as some people think to train a model, right? And a lot of the inference is actually not even done in China; it's done in America by all the companies that host the open-source models. Right? I mean, famously, for people who are familiar with the DeepSeek paper, they spent less than $6 million, like five and a half million, to do the core pre-training for one of their world-class models.
And that was a shock to Americans. Actually, if you go online, you'll see a lot of Americans still don't believe this was actually done. But I think it's pretty clear they were able to do it. And so, yeah, we may need much less compute than people thought if we are smart about the model architecture and various optimizations. My feeling is that people never really asked any questions about how many AI chips we actually need. People just thought, we need to get to AGI, and there was never any question asked about the resources, the money that goes into it.
That's why nobody questioned whether or not this could be done with less computation. Yeah, there's a meme, a particular narrative in the U.S. now, kind of the mid-to-high-wit narrative, which is: oh, well, gee, necessity is the mother of invention, and because they were GPU-poor, the DeepSeek guys and other guys in China did all this hard work to optimize their architectures, et cetera. But that leaves aside the fact that, look, this is extremely g-loaded stuff. Many people, even if they had the necessity, were not going to come up with the invention. And so they're kind of overlooking how hard it was to actually implement all of these things, get it to run, and then efficiently train models with it. I mean, it's very, very nontrivial. Yeah. And if you look globally, basically nobody else was training competitive models outside of America, right? Exactly. Mistral kind of did for a little bit, but not really.
Yeah. Now, speaking of resources, a lot of people are talking about the fact that the founder of DeepSeek, Liang Wenfeng, was invited to meet with the premier of China, the number two guy, Li Qiang. And it's very plausible to me that they will not have any resource limitations going forward. Yeah, and I just want to make it obvious out there that computation power itself, I don't think it's as big of a deal as people say, because in China there are these public data centers. As part of the EDWC, the East Data West Compute project, there are going to be these mega, 100-EFLOPS data centers built across China. The issue with DeepSeek is they didn't really have access to this before, because they were not well known until the past few months, basically. They probably didn't get the attention of the Chinese government until the past couple of months. So I think they're going to have much less of a computation resource issue going forward. Yeah, I totally agree. When I was in China not that long ago, I had some meetings with some very high-level people, and we actually discussed the planning. It's joint planning between the major corporations, the AI leaders, and the government about provisioning this compute that you're talking about. And maybe at that time DeepSeek was not at the table, but definitely they're going to be at the table now. But the main take-home is, China is not going to be blindsided by this. The government is working with the companies to make sure there's sufficient compute, so they can't end up way behind the Americans based on compute. And the companies are also part of the general policy-setting process. Especially if you're calling these things Manhattan Projects, it's always good that the government understands a little bit of what the private organizations are doing and makes sure the process helps them, same as in America, right? Like, one of the major issues with the entire AI rollout here is the power constraint we have. Part of the thing with Stargate was, we need to build a bunch of nuclear power stations, right? Whereas in China, there are other concerns. So it looks like DeepSeek got a seat at the big boys' table, where they get to inform the highest level of the Chinese leadership on what's needed to accelerate development.
Exactly. Now, I was going to say, if you list the Chinese companies that have produced really state-of-the-art models, okay, there's a huge list actually, depending on exactly what you mean by state of the art. But DeepSeek clearly has one in R1. We mentioned Doubao 1.5, which is also super optimized and has reasoning capabilities. But there's another one, by iFlytek, which, at least in terms of benchmarks, is one of the top models in the world, and that was the first model, I believe, that was trained 100 percent on Ascend, on Huawei hardware, not using any NVIDIA GPUs. So it is clearly possible for them to keep pace just by using internally produced chips. Yeah. The thing with iFlytek was they got put on the entity list several years ago, so they had no choice but to use Ascend chips, all the way from 2021 onward, I think. So the fact is they were able to keep the pace, and they do a pretty good job with multimodal types of models. So I think it's correct to assume that the Ascend chips themselves, even if they're a little slower than the NVIDIA chips, even if they use a little more power, if you get enough of them, they can do the job.
Yeah, exactly. So I think the points we just made in the last five or ten minutes are really lost on high-level people in Washington talking about the AI race or the chip war. And even people in the Valley; I think people in the Valley are not tracking this sufficiently well to have even the conversation that you and I just had. Let's shift gears and talk a little bit about the real-world capabilities of the DeepSeek models. I, and a bunch of people I keep in touch with, people who actually read these papers, have been testing DeepSeek V3, and now R1, substantially. I actually feel they're extremely good. I think they're on par with the best, say, o1 or any other Western model. I haven't used them for coding, so I can't comment on that very much. But in terms of the ability to answer questions or solve difficult problems, I'm super impressed with the models, and I don't think the benchmarks are misleading. I'm curious what you think. Yeah.
So I've also used them a little bit personally, but I also had a chance to actually apply DeepSeek V3 to some work-related stuff, where it had to read some pretty complicated documents and answer questions about them. I wanted to do some benchmarking to see how good the model is, because if it is actually so much cheaper, it might make sense down the road to use it. What I typically run these things on are prompts written very much for GPT-4o. I then used the same prompts, with minimal changes, ran them on DeepSeek V3, and found the results to be maybe 90 to 95 percent of what the GPT-4o performance looks like. And that's without even tweaking any of the prompts. Obviously, for each of the LLMs, the prompt itself makes a big difference. So I would imagine that if I were to tweak the prompts a little bit, I could probably get the same performance out of DeepSeek V3 as out of GPT-4o. Now, OpenAI itself has some other functions they're offering now that are probably not available on an open-source model yet. But just in terms of going through the legal documents and financial documents I tested, which are not simple at all, it seemed to do the job really well. Yeah.
So at SuperFocus, we are in the middle of testing to see to what extent we can replace, for example, GPT-4o with DeepSeek models or other open-source models. And I know founders of other applied AI companies, the ones that are really building products used in actual enterprises; everyone is basically going through this testing now. And so, six months from now, you could see a huge shift, where there's much less utilization of, say, OpenAI models and much more utilization of these really good open-source models. It feels like the entire AI community overnight is basically testing the DeepSeek models out, whereas before they might have been using Llama, just so they're not beholden to the OpenAI issues and costs and such, right? The interesting things I find about DeepSeek R1 are, one, you can run it on your local computer if you want to, like the distilled models. And two, one of the main issues I have with o1 is that it's just very slow,
because it takes so much time to think about questions and things like that. And another issue I personally have with OpenAI is the fact that sometimes it just goes down intraday, for four hours at a time. And you can imagine, if you're a corporation, one, you're already scared of sending your private data to an AI firm, and two, you need a hundred percent uptime on your system. I just don't see how you can trust a closed-source AI if the open-source AI is close to it in performance. Yeah, absolutely. That's how I see it as well, and that's how I think other founders see it, the ones in the applied space, not the foundation model space, the people who apply foundation models to solve real-world problems.
Okay, so we've done a little over 30 minutes. Any last things you want to say about what I would call the DeepSeek R1 Sputnik moment that we just went through? Yeah, I think this AI race, and I really hate to think of it as an AI race, is quite significant. My personal view is that if we can lower the price of AI to as cheap as what we're seeing these Chinese firms do recently, that's actually a huge plus for accelerating AI utilization going forward. And open-sourcing something probably gives customers more confidence that they will have control of their data sources and things like that. So not relying on closed-source models is actually a major help toward full deployment of AI in the next five years. Yeah, I totally agree. That's my hope.
Yeah. That's my hope on this. The other thing I've been paying attention to is the DeepSeek app. I was listening to the All-In podcast today, and they mentioned that one of the main assets of OpenAI is ChatGPT, the app itself, right, or their website. And I was looking at the list, and I guess it's due to the huge wave of press they got recently, but I saw the DeepSeek app skyrocket to like third on the iOS list for productivity. I think ChatGPT is number one, but DeepSeek is now above Gemini on the iOS store. It's also above Gemini on the Google Play store. So I think ChatGPT has a challenge on their hands from the app point of view. I think DeepSeek can keep this going. I'd be curious to see just how much the app side of things makes a difference going forward, also. Yeah. Incredible. All right. Well, thanks for joining me again.
And for our listeners, now we're going to transition and you'll hear a conversation that TP and I had a few weeks ago about sixth generation fighter jets and military technology competition between the United States and China. See you in the next segment. Welcome to Manifold. My guest today is TP Huang. He's been on this podcast before. TP is a very, very active poster on X. And I suggest you follow him if you're
interested in technology, specifically in U.S.-China technology competition. And he follows many different verticals, ranging from batteries, alternative energy, electric vehicles, AI, and what we're going to talk about today, which is military technology. I've said it before, I'll say it again.
I think of all the people I know of, including people in the think tank business, in the Pentagon, academics, nobody is following this hugely complex and important set of subjects with the granularity and deep insight of TP Huang. And I have no idea how he does it, because in his day job, he's an AI engineer. TP, welcome to the show. Oh, hey, Steve. I'm really happy to be back.
Great. So to lead into this subject: here's TP talking to a group of people that we both know, not even a month ago, three or four weeks ago, and he's saying, hey guys, be ready for some big news. And everyone's like, well, what is TP talking about? What is it, a new missile? Drones? And of course, TP was referring to the reveal, which happened just recently, of two sixth-generation, I believe, fighters, but you'll clarify, TP, if that's correct: two huge sixth-generation stealth airplanes that the Chinese military, or companies that work with the Chinese military, have developed. And these are really the first sixth-generation planes that any country has, although they're not in full production yet.
So, TP, maybe you can just start out by introducing what it is we know about these new planes. Yeah. Hi, Steve.
So what we know about the planes so far is that they're in the initial phase of their flight testing at the moment. Just to put things into perspective, what we have right now is what we consider to be the sixth generation, or at least the new generation, of fighter jets. In the previous generations: the fourth generation, on the China side, would be the J-10s; on the America side, the F-15s, F-16s, F-18s, and F-14s. The fifth generation would be the F-22s and F-35s, and on the China side, the J-20s. So just to put things into perspective, China flew its fourth-generation plane for the first time in 1998, and the aircraft joined service in 2004. The J-20 project first flew in 2011, and it joined service in 2018. So what happened in December was that the two major Chinese fighter jet design bureaus, one in Chengdu, one in Shenyang, both had their prototypes, not demonstrators, actual prototypes, fly for the first time, and this was made public. There were a lot of questions out there about whether these are demonstrators, how early in the process they are, but based on the patterns the PLA-watching community has followed over the past few decades, we believe these are actual prototypes. And so we think the planes flying now are going to join service in around the 2031-2032 range, just based on the previous two examples in the fourth and fifth generations. So they're still about seven years from joining service based on what we know, but they are definitely coming, I guess.
Got it. And from the American side, do we know anything about where the American sixth-generation program is? Yeah, so on the American side, there are two programs right now that we know of. The Air Force has what they call the NGAD project, the Next Generation Air Dominance project, and the Navy has the F/A-XX project. The NGAD project was formed around 2014. What we know about it is that it had demonstrators, or X-planes, fly around 2020. What happens is, companies like Lockheed, Boeing, and Northrop Grumman submit proposals to try to win the contract. For example, for the F-35 program, you had the X-32 and the X-35, the Boeing and Lockheed proposals, and eventually Lockheed won the F-35 contract. For the NGAD project, a lot of people believed they were going to select the winning proposal sometime in 2024, but that got delayed because of cost issues. America is currently in the middle of what we call the nuclear triad renewal program, which is basically refreshing the three-legged nuclear deterrent, and the Air Force right now has to shell out a lot of money to pay for the Sentinel ICBM and also the B-21 bomber project. So it was feeling a little cash-strapped, and I guess it was not expecting the Chinese program to proceed this quickly. For the NGAD project, the current status is they're basically punting the decision of what to do to the Trump administration. And then the naval project, F/A-XX: they did say they would like to pick a winner in 2025, but prior to this, they also had their own budgetary issues. And that is mostly also due to the nuclear triad renewal, because we have a new class of ballistic missile submarine, the Columbia class, under construction, because the Ohio class is getting too old and needs to be replaced at some point. From the Navy's point of view, the submarines normally rank highest in priority, and especially the ballistic missile submarines are the highest priority in terms of budgetary concerns. And also, just due to the very weakened shipbuilding industry in America, a lot of the warships are costing a lot of money to build, and due to that, there hasn't been as much money allocated for the F/A-XX as for NGAD.
So while they're saying they would like to pick a winning proposal in 2025, that remains to be seen. But that's kind of where we are with the American programs at the moment. If the U.S. had already chosen its winning designs for these sixth-generation planes, at what point would those go into production? Generally speaking, I'll just use the F-35 as an example. The winning proposal was picked in 2001, I believe. After the winning proposal was picked, it took another five years for the first flight to happen. And after the first flight happened, it took another ten years for the Air Force version to achieve initial operating capability. The F-35 program was kind of a mess, so I'm not sure that's the best example. But for the F-22 program, the winning proposal, I think, was picked in the early nineties; it first flew in 1997 and then joined service in 2005. So generally speaking, in the fifth-generation projects, you're looking at three to five years from picking the winning proposal to the first flight, and then another eight to ten years for the testing to be finished. Now, many people would argue there were a lot of problems in the way the fifth-generation programs were run, that those are getting rectified, and also that America wasn't facing a peer competitor back then, so they just felt they had more time. So the sixth-generation program probably would not take as long. My assessment is that the sixth-generation program, even if things go smoothly, would take about three years from picking the winning proposal to the first flight of a prototype, and probably another, not ten years, seven years for it to actually achieve initial operating capability. Okay. So back in the Cold War, we used to talk about the missile gap between the Soviets and us. Here, maybe we're talking about three to five years where the Chinese have an operational sixth-generation fighter and the U.S. doesn't have one. Is that fair? I would say probably closer to five to six years, based on my own estimations. I would put NGAD as likely entering service around the 2037 range.
Okay. So the main shock is, maybe the U.S. side thought they were going to get to the sixth generation way faster than the Chinese, since they've been producing fifth-generation planes for a long time, since the F-22 first came out. Yeah, so if we look at the Cold War, even from the start of the jet aircraft age, America never fell behind at any time. The Soviets had some good aircraft, but they were never able to match the American pace of developing and innovating new aircraft. So in the most likely case, where the Chinese program does enter service first, it will be the first time in history that America has fallen behind in military aircraft. And American hard power rests upon its air power, right? So if you have a situation where you want to fight a peer competitor 6,000 miles away and you don't have the best aircraft, there's not much deterrence there, really. Right. So the American way of war has been predicated on air dominance basically forever, and they may not have it in the East Asian theater.
Let's talk specifically about the capabilities of these planes. So they're huge, they're stealthy, and they may have very advanced radar and electronic warfare capabilities. Maybe you could elaborate on that. Yeah. One of the things we noticed with the two Chinese projects, and it's very striking actually, is that they point to something China identified as a weakness for America. I would say American aircraft have been designed to be a little smaller than they should be. The implications of having smaller aircraft are a couple-fold. One: because they're smaller, you can't carry as much jet fuel in them, so they don't fly as far. That's one problem, but the more pressing issue when it comes to the next generation is what we are seeing right now with the F-35 program.
One major factor going forward is that you're going to see the need for more high-power radio-frequency equipment, radars and all kinds of electronic warfare equipment, in your plane, and that will require a lot of power generation. And the power is required not only for the emitters, the stuff that sends out and receives the radio waves, but also for the computation and the cooling capacity. In the future, I would assume these aircraft will also need a laser, what we call directed-energy weapons, against threats that are coming in. So why would I say this is currently a weakness for the U.S. Air Force? We're seeing it right now with the F-35 program, where the plane was originally designed with cooling capacity for, I think, under 20 kilowatts. As part of the first upgrade, they were able to raise the cooling capacity of the F-35 to 30 kilowatts. And now they're saying that for the Block 4 F-35, they have to make this huge modification just to get the cooling capacity up to 60 to 80 kilowatts, and they think that's going to future-proof their requirements.
The reason this is a problem is that when the F-35 first came out, the radar itself was very advanced for the time. It was using what we call gallium arsenide technology; the first generation of AESA radars all used gallium arsenide. These components are fundamental to sending out the radio waves and receiving them, whether for the radar or for the electronic warfare portion of things. The F-22, when it first came out, had around a 20-kilowatt-peak-power radar, and the F-35 actually has less than that, because obviously the F-35 is a little smaller. They also stuck with gallium arsenide technology, thinking that other air forces couldn't catch up to them in any speedy fashion. But unfortunately for America, China actually caught up to them pretty quickly on that part of things. So the F-35 project, with the APG-81, uses gallium arsenide technology in its radar, and it was probably using the best gallium arsenide technology available at the time. But essentially they were still using what we now consider to be quite legacy technology. It uses basically the same type of material as the F-22, which also uses gallium arsenide. That's why, with the F-22, you had maybe 20 kilowatts of peak power on the APG-77, and the F-35, I think, was a little less than that, because the F-35 is a little smaller and has a little less interior space.
So even though it did have a better version of gallium arsenide, it wasn't significantly improved over the F-22. And what we also know about gallium arsenide is that it is relatively weak compared to the third-generation semiconductor material that's become available, like gallium nitride. If you want to think about it: gallium arsenide is what we use in our phones for what they call power amplification, taking a signal, amplifying it, and sending it out, whereas gallium nitride is used by 5G base stations and also by satellites in space as they transmit data back and forth. So basically, it's more ideal to use gallium arsenide for low-power applications and better to use gallium nitride for higher-power applications. So the F-35 has this limitation right now, where it's using a gallium arsenide AESA radar for all its radar-related electronics. We're expecting it to go to gallium nitride with the next-generation APG-85. But the problem with that is, it's now limited by a design that allows a maximum of only 60 to 80 kilowatts of cooling.
So there's a limit to basically how powerful the radar itself can be, and I think China really saw this as a possible advantage. That's why the sixth-generation aircraft being developed by China all have a huge nose in the front. If you put a picture of the J-20 and the J-36 side by side, and likewise a picture of the Chinese Flanker Su-27 variants next to the Shenyang sixth-generation project, you notice the noses of the sixth-generation projects are humongous; they are at least twice as wide as on the fifth-generation or fourth-generation aircraft. When you have something that's twice as wide, from a two-dimensional point of view the area is at least four times as large, and that means the radar you can fit in the nose can be four times as large. On top of that, you have four times the volume to stick in any kind of electronics you want to put in the front: any kind of cooling, power generation equipment, thermal management, any CPUs and GPUs you need in there to do computation and run these large AI models to control drones and whatever. So the more interior space you have, the better it is. What I'm anticipating, based on the size of the noses of these aircraft, is that we could be looking at megawatt-class platforms with the sixth generation. Maybe they will not be generating one megawatt of power from day one, but they can be improved in future iterations to support one megawatt or even higher power generation. And when you think about matching up a future Block 4 F-35, with maybe 60 kilowatts of power, against a Chinese sixth-generation aircraft with one megawatt of power, that's a huge difference.
If the radar power you send out is 16 times as much, your detection range doesn't grow 16 times, because the radar wave has to travel out to the target and come back; it's basically a fourth-power relationship. So when the power is 16 times as much, your radar can see twice as far, basically. And when it comes to electronic warfare, you can basically suppress the other side's radar.
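For reference, the fourth-power relationship TP describes is just the textbook radar range equation: the echo has to travel out and back, so detection range grows only as the fourth root of transmitted power (a one-way jamming or communications link, by contrast, scales as the square root):

```latex
% Radar range equation: received echo power falls off as R^4, so the maximum
% detection range grows as the fourth root of transmitted power P_t.
\[
R_{\max} = \left( \frac{P_t \, G^2 \, \lambda^2 \, \sigma}{(4\pi)^3 \, S_{\min}} \right)^{1/4}
\qquad\Longrightarrow\qquad
\frac{R_2}{R_1} = \left( \frac{P_2}{P_1} \right)^{1/4},
\quad 16^{1/4} = 2 .
\]
```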
You generate signals, as received by the other side, that are four times as strong. So there is a significantly higher amount of electronic power coming out of these new Chinese aircraft, and that's one of the main advantages they have. The other thing that people noticed very early when they looked at the Chinese sixth-generation projects is that none of them have tails.
So these are novel designs. They're kind of like, not the F-22, but the B-2 and B-21, in that they're more of a flying-wing kind of design where you don't really have a tail, and that makes the aircraft really hard to detect from all angles and by different wavelengths. One of the things that separates B-21-type stealth from F-22-type stealth is that you're stealthy to a different degree and from every angle, but more importantly, you're also stealthy against what they call ultra-high-frequency radar, radar where the wavelength is much longer. When you use flying-wing, tailless designs, you're very stealthy against those kinds of radars too. So from fifth to sixth generation, you're seeing not only a lot more power but a lot more stealth and a lot more range. And the J-36 design itself uses three engines.
The J-36 is basically the Chengdu sixth-generation project, and it has three engines. The reason it has three engines is, one, it needs them for power generation, and two, it needs them because it wants to be able to cruise at supersonic speed without using afterburners. Based on the three-engine configuration, we think it can go as high as Mach 2.0; it can sustain cruising at Mach 2 without afterburners. And that makes a huge difference, because then it can get to the battlefield much quicker without using up a lot of fuel, because using afterburners is really not very efficient. So when you're flying, you'd like to stick with non-afterburning flight as much as possible. So what do you think the level of stealthiness is, i.e., what do you think the radar cross section of these flying-wing designs is? You know, that's really hard to say, but I would guess it's probably significantly better than the F-22, but probably not as stealthy as the B-21, because the B-21 isn't expected to cruise at supersonic speed or make turns as much.
If you look at the J-36, it does have these surfaces on the back of the aircraft that are used to maneuver. So there's some really complex flight control software that needs to be written and tested to validate the performance, because it doesn't have tails. Yeah. The control surfaces are just these little things at the end of the wing. Yeah, they're not major deflectors or anything like that. I'm curious, though: you were mentioning all-angle stealth. Now, if you're directly above any of these planes, wouldn't they actually have a kind of large radar signature? Yeah, I guess so.
But in most cases, once something gets to the point where it's right on top of you, you're probably dead by that point already. Yeah. But I'm thinking, in the future, and we're going to get to this in a second, when you have a crewed fighter, a fighter with a pilot in it, and then you have drones, the drones could be at maybe a much higher altitude with their radars pointing down. So I always wondered, I don't want to say game theory, but the strategy or tactics of stealth seem much more complicated when you have multiple platforms around, and they could be at very different altitudes. I've just never seen any full analysis of all this stuff, probably because it's classified.
Yeah. I think that's the reason the fifth generation went from it being really hard to find and obtain weapons-quality tracking on an F-35 to that becoming not so hard, at least for the two major powers, China and the U.S. When you have a lot more sensors, from different angles and on different wavelengths, it becomes a lot easier to first figure out which direction something is coming from. And once you figure out which direction it's in, you can have your weapons-grade radar just target that direction, focus all its power on that direction, and because it's scanning a much smaller area, it can obtain weapons-grade tracking from much longer range. So one of the big requirements from fifth to sixth is you have to get to a much higher level of stealthiness. Okay.
So you've got a big plane. It's got potentially a big fuel capacity, because the tanks are in that giant wing; it may have very long range, and it may have very powerful radar and electronic warfare capabilities. How do you see these things being used? So I think, generally speaking, when these aircraft first came out, the Chinese commentators generally said something along the lines of: Guam, you're in trouble. We believe they probably have a 3,000-kilometer combat radius, in order to reach Guam, and the actual range is probably close to 10,000 kilometers, just based on what people are saying. So these can go a long way; they can do the battle, lead a bunch of drones, and come back. From fifth to sixth generation, it's important for people to think about these new sixth-generation systems as systems.
One of the questions that got raised by people online was: well, this thing is not very maneuverable; if we get in a dogfight with it, we can move much better than it. But the reality is that these aircraft are expected to operate in a team. Each is expected to have a bunch of what the American Air Force calls CCAs, or collaborative combat aircraft: a bunch of drones that are operated by the piloted aircraft. The piloted aircraft are expected to be really high-value assets, so they're expected to stay a little further back, while the drones operate further in front. On the battlefield, the role of the piloted aircraft is not only to find other systems but also to work together with all the drones and other aircraft in the theater, and then use what we call sensor fusion to have a full view of what they think the actual battlefield looks like. If you're trying to find, for example, an F-35 directly head-on, it's kind of hard to find, because that's where it's most stealthy. But if you have a bunch of drones in different directions, all scanning to find where the F-35 is, they can capture it from different angles.
And once you transmit that data to your piloted fighter, it can find the target at much longer range. And once you have situational awareness of where all the enemy assets are, whether it's the aircraft, the naval ships, the land-based radars, or the satellites, then you can make decisions about what to do with them.
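A toy illustration of why sensors at different angles help so much: two spatially separated platforms that each get only a bearing (a direction) to a stealthy target can intersect their bearing lines to produce a position fix that neither has alone. This is a deliberately simplified flat-earth sketch, not any actual fusion algorithm; the coordinates and bearings are made up:

```python
# Two sensors each measure a bearing (degrees clockwise from north) to a target;
# intersecting the two bearing lines yields a fused position estimate.
# Coordinates are flat-earth kilometers, x = east, y = north.
import math

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve p1 + t*d1 = p2 + s*d2 for t via Cramer's rule on a 2x2 system.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        return None  # parallel bearings: no unique fix
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) + ry * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# A drone at the origin sees the target at bearing 045; a second drone 100 km
# to the east sees it at bearing 315. The fused fix is 50 km east, 50 km north.
print(intersect_bearings((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))  # ~ (50.0, 50.0)
```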
For example, in some cases you might want to jam certain things, like the satellite signals. In other cases, you might want to use electronic warfare to confuse the other side. One famous example for the PLA is from 1996: they noticed on their radar systems that there were hundreds of American planes coming at them. So they scrambled a bunch of their old, very archaic second-generation aircraft, MiG-21 variants, into the air. And once they got in the air, they realized there were no aircraft coming at them. Later on, they figured out that what happened was the American electronic warfare planes, most likely EA-6Bs, had basically just confused their radar, making them think there were hundreds of aircraft coming when there was nothing. That is part of the power of electronic warfare: you have a system whose goal is to confuse the other side into thinking there are aircraft in places where there isn't anything, and that where there are things, they're not there. So that's an important part of the piloted aircraft's role in this case.
So the J-36 in this case is direct. It