The IBM Quantum State of the Union

I want to welcome you all to the 2021 IBM Quantum Summit. This is our flagship invitation-only event for the quantum computing community, bringing together the brightest minds in quantum computing to hear industry-defining launches, announcements, and perspectives on this remarkable emerging technology. Before I get started talking about our technical roadmap, I wanna take a step back just to remind us why we're all doing this. It's been said before, but it bears repeating.

We need to take better care of this thing. We need to feed and provide more energy for our growing populations. While at the same time, we need to reduce the effects of climate change. We have to stabilize volatile global economies and create more economic opportunities for all. We have to counter emerging viruses and bacteria.

We can't tackle these big problems by plugging away with the same tools that we've been using for decades. Classical computing, incredible as it is, is not enough now and never will be. We need something new. And now we think we are getting close. Today, as Dario announced just a few minutes ago, we broke the hundred-qubit barrier with our 127-qubit Eagle processor. This is huge.

Computationally speaking, we are in uncharted territory, and that's very exciting. The first thing you do in uncharted territory is make a map. So that's exactly what we did back in February of this year. Back in February, we shared our Development Roadmap. This is our detailed plan to reach frictionless quantum computing using over a thousand qubits. And we still plan to do this by the end of 2023.

Now, anyone can say they have a plan to do stuff. The hard part is doing it. This is the difference between having a plan and a wishlist.

So let's see what we've been able to achieve this year. In February, we said we would demonstrate a hundredfold increase in performance with Qiskit Runtime. In May, we did just that.

In fact, we got a 120-fold increase. We also said we would launch our 127-qubit chip in 2021. And we've just done that. Breaking the hundred-qubit barrier is important for many reasons. We see it as a turning point in the development of quantum computing. We're no longer just focused on the scale and quality of qubits.

We can now focus on the useful work those qubits can do. In other words, we can start to talk about performance. We measure performance with three key metrics, and we must keep improving all of them all the time.

First, we have Scale. Increasing the number of qubits in our systems is critical: it determines the size of the problems we can compute. Our ability to scale is tied to the capabilities of our hardware technology, and IBM has a long history of leading the world here.

We must continue investing in the hardware technology to ensure we advance every year. The second is Quality. This is a measure of how good our technology is at implementing quantum circuits. We currently measure this using Quantum Volume, a benchmark we introduced to the industry in 2017 that has since been widely adopted. And finally, we have Speed. This is a measure of how fast our systems can solve a problem.

We need to be able to solve useful problems in a reasonable time, or we do not have a business. So we measure this by QPU speed. And we'll talk more about this later. Let's start with scale.

Let's dive into more details about how we broke the hundred-qubit barrier with Eagle, and how we're going to keep driving scale over the coming years. I'd like to welcome Jerry to the stage to tell us all about it. Hey, Jay. Thanks a lot, Jay. Last year, we shared our aggressive, industry-pace-setting development roadmap, committing to scaling our quantum systems each year. We named our quantum hardware system generations after well-known birds.

So we first delivered Falcon, at 27 qubits, to you all two years ago. And then at last year's Quantum Summit, we released our Hummingbird processor at 65 qubits. By the way, we haven't chosen these qubit-count targets arbitrarily. In fact, the roadmap is fully aligned with the development of critical enabling technologies along each step, much like the technology nodes that defined process generations on the semiconductor roadmap.

Now here, I'd like to showcase the core technologies that we're driving, the critical enablers of each processor family. With Falcon, our challenge was reliable yield. We met that challenge with a novel precision Josephson-junction tuning process, combined with our collision-reducing heavy-hexagonal lattice.

Then with Hummingbird, we implemented a large-ratio multiplexed readout, allowing us to bring down the total cryogenic infrastructure needed for qubit-state readout by a factor of eight. This reduced the raw amount of componentry needed inside the cryostat. And now with Eagle, we've continued to leverage IBM's hardware technology. This is something very deeply rooted in our history. Eagle was born out of a necessity to scale up the way we do our device packaging, so that we can bring signals to and from our superconducting qubits in a much more efficient way. Our work to achieve this relied upon IBM's experience with CMOS technology.

This includes through-substrate vias and multi-level wiring technology. Now, here's a visual of what our Eagle processor looks like. It's two chips. The Josephson-junction-based superconducting transmons sit on one chip, which is attached to a separate interposer chip through bump bonds.

This interposer chip provides the much-needed connections to the qubits through packaging techniques that are common throughout the CMOS world, including through-substrate vias and a buried wiring layer, which is completely novel for this technology. The presence of that buried layer gives us tremendous flexibility in the routing of signals and the layout of the device. I can categorically say that this is the most advanced quantum computing chip ever built. In fact, not only has it been built, but our Eagle has landed.

This is the world's first quantum processor with over 100 qubits; in fact, it has 127 qubits, arranged in our now well-known heavy-hexagonal lattice. And let me stress, this isn't just a processor we fabricated, but a full working system that is running quantum circuits today.

Here's the device map for IBM Washington, currently deployed as an exploratory system that the team is busy putting through its paces with calibrations and benchmarks. And in fact, here's a circuit that Jay just ran this morning, showing entanglement across four qubits. Now I know our partners, clients, and the rest of the world are dying to get their hands on this.
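The exact circuit wasn't shown on screen, but a minimal four-qubit entangling circuit of the kind described looks like this in Qiskit; treat it as illustrative only, not the circuit that was actually run.

```python
# A minimal Qiskit sketch of a four-qubit entangling (GHZ-style) circuit of
# the kind described on stage; illustrative only.
from qiskit import QuantumCircuit

qc = QuantumCircuit(4, 4)
qc.h(0)                     # put qubit 0 into superposition
for i in range(3):
    qc.cx(i, i + 1)         # chain CNOTs to entangle all four qubits
qc.measure(range(4), range(4))
print(qc.draw())
```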

And our plan is to have Eagle widely available at the end of the year. And most importantly, we are on track with our hardware roadmap. We said we'd give you Falcon.

And we did that. We said we'd give you Hummingbird. And we did that. We said we'd get you Eagle. And here we are.

Now, we next plan to scale to 433 qubits with our IBM Osprey processor in 2022. And for this, the team is already hard at work on the next generation of scalable input/output that can deliver signals from room temperature down to cryogenic temperatures. We're on track.

Back to you, Jay. Thanks, Jerry. As I said, quality is a measure of how good our technology is at implementing quantum circuits. This includes effects like material loss and other imperfections, as well as control and readout errors. We measure this with a metric we introduced to the world in 2017 called quantum volume. And since then, we set ourselves the goal of doubling our quantum volume every year.

So far, we've doubled it every year, and we currently have a quantum volume of 128. To continue at this pace, we need a constant drumbeat of scientific breakthroughs.
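For the curious, Qiskit ships the benchmark's model circuits. A quantum volume of 128 = 2^7 corresponds to the 7-qubit, depth-7 circuits; the sketch below builds a single ideal instance and its heavy-output set, whereas the real protocol runs many such circuits on hardware and checks that heavy outputs appear more than two-thirds of the time.

```python
# Sketch of the Quantum Volume building block: one 7-qubit, depth-7 model
# circuit (QV 128 = 2^7) and its ideal heavy-output set.  The full benchmark
# runs many random instances on hardware; this only shows the ideal side.
import numpy as np
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector

qc = QuantumVolume(7, depth=7, seed=1234)
probs = Statevector.from_instruction(qc).probabilities()
heavy = probs[probs > np.median(probs)]   # "heavy" = above-median outcomes
print(f"ideal heavy-output probability: {heavy.sum():.3f}")
```

So here to explain these is Matthias Steffen.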

Matthias. Hey Jerry, how are you? Hey, Jay. Thank you. Today, I'd really like to highlight major progress on two topics: coherence and gate fidelity.

As a result of many hard hours of work and ingenuity, we've had a breakthrough with our new Falcon R8 processors. We have succeeded in improving our qubit T1 times dramatically, from about 0.1 milliseconds to 0.3 milliseconds.

This breakthrough is not limited to a one-off chip. It has now been repeated several times. In fact, some of our clients may have noticed the device map showing up for IBM Peekskill recently. This is just a start.

We have tested several research test devices, and we're now measuring 0.6 milliseconds, closing in on reliably crossing the one-millisecond barrier. We have also had a breakthrough this year with improved gate fidelities. Here, you can see these improvements color-coded by device family. Our Falcon R4 devices generally achieved gate errors near 5 times 10 to the minus 3. Our Falcon R5 devices, which also include faster readout, are about a third better.

In fact, many of our recent demonstrations came from this R5 device family. Finally, in gold, you see some of our latest test devices, which include Falcon R8 with the improved coherence times. You also see several other measured gate fidelities for other devices, including our very recent Falcon R10.

Falcon R10 is very exciting, as we have measured a 2Q gate breaking the 0.001 error-per-gate level. So now, in addition to the announcement that we've broken the hundred-qubit barrier with Eagle, we have a second major announcement. We're proud to say that in recent weeks, we've broken the 0.001 error-per-gate barrier. We're now below 0.001, which corresponds to over 1,000 gates per error. We look forward to continued progress over the coming months.
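As a quick back-of-envelope on what these numbers mean: the 300 ns gate time below is an assumed, illustrative value, and the coherence-limited floor is only a rough rule of thumb (error on the order of t_gate / T1 when the gate is much shorter than T1), not a precise model.

```python
# Back-of-envelope numbers behind the announcements.  The gate time is an
# assumed, typical value; the coherence floor is a rough rule of thumb.
error_per_gate = 0.001
print(f"gates per error: {1 / error_per_gate:.0f}")   # over 1,000

t_gate = 300e-9                       # assumed ~300 ns two-qubit gate
for t1 in (0.1e-3, 0.3e-3, 1.0e-3):   # T1 values quoted in the talk
    print(f"T1 = {t1 * 1e3:.1f} ms -> coherence floor ~ {t_gate / t1:.0e}")
```

The point of the arithmetic: at a T1 of 0.1 milliseconds, coherence alone puts you near the 0.001 barrier, which is why the T1 breakthrough and the gate-fidelity breakthrough go hand in hand.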

Thank you. And back to you, Jay. Thank you, Matthias.

As quantum computing evolves and we see even greater improvements in scale and quality, we start to care more about the useful work our systems can do in a reasonable time. If we measure scale by the number of qubits and quality by the quantum volume, then quantum processing speed is a measure of the useful work those qubits can do in a reasonable time. We define it as the number of primitive circuits that can be processed in a second. It's similar to FLOPS in classical computing, the number of floating-point operations per second. Improving QPU speed is key to practical quantum computing. So with that, I'd like to introduce Katie to the stage to take us through our news here.

Katie. Thanks, Jay. Thanks, Jerry. As Jay mentioned, as quantum evolves, we start to care about the useful work our systems can do in a reasonable amount of time. There's no getting away from it: useful quantum computing requires running lots of circuits. Most applications require running at least a billion.

So if it takes my system more than five milliseconds to run a circuit, it's simple math: a billion circuits will take you 58 days. That's not useful quantum computing. That's practically waiting from Halloween to New Year's Eve. So increasing QPU speed is essential for bringing practicality to quantum computing.
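Here's that simple math spelled out, plus what faster circuit times would buy; the 1 ms and 0.1 ms figures below are illustrative, not announced numbers.

```python
# The "simple math" behind the 58-day figure, plus what faster circuit
# times would buy.  Only the 5 ms case comes from the talk; the rest are
# illustrative.
SECONDS_PER_DAY = 86_400
n_circuits = 1_000_000_000

for seconds_per_circuit in (5e-3, 1e-3, 100e-6):
    days = n_circuits * seconds_per_circuit / SECONDS_PER_DAY
    print(f"{seconds_per_circuit * 1e3:.1f} ms/circuit -> {days:,.1f} days")
```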

At the lowest level, QPU speed is driven by the underlying architecture. This is one of the reasons we chose superconducting qubits. In these systems, we can easily couple the qubits to the resonators and the processors. This gives us fast gates, fast resets and fast readouts, fundamentals for speed. Take the Falcon R5 processor, for example. This is a huge upgrade over the Falcon R4. With the R5, we integrated new components into the processor that give an eight-times-stronger measurement rate than the R4, without any effect on coherence.

This brings measurement times down to a few hundred nanoseconds, compared to a few microseconds. Add this to the other improvements we've made to gate times, and you have a major step forward with the Falcon R5. And this brings us to our third major announcement today.

In addition to breaking the hundred-qubit barrier with Eagle and the three-nines gate error rate Matthias just talked about, we're officially moving Falcon R5 from an exploratory system to a core system. That's right. We know our users have been dying to get their hands on this system. Now they can. With stable, reliable Falcon R5 technology as the core, we're really looking forward to seeing what our users and partners can do with all this extra performance.

In 2016 when we put the first quantum computer on the cloud, we created a simple programming model where circuits could be sent directly to the quantum computer. This was really all that was needed when quantum computing was just starting out. But this architecture only allows the QPU to work at 5% to 10% efficiency. A GPU works with a CPU and runtime software to fully utilize this power. We see our QPUs working exactly the same way.

To get the most out of quantum processors, we need to bring some classical computing close to the QPU. In May, we released Qiskit Runtime in beta. We created Qiskit Runtime to be the container platform for executing classical code in an environment that has very fast access to quantum hardware.

In fact, Qiskit Runtime completely changes the use model for quantum hardware. It allows users to submit programs of circuits, rather than simply circuits, to IBM's quantum data centers. This approach gives us a 120-fold improvement in performance overnight.

A program like VQE, which used to take our users 45 days to run, can now be done in nine hours. This leap forward in performance, combined with our 127-qubit Eagle processor, means that from this point on, no one really needs to use the simulator anymore. So our fourth major announcement today is that all of our devices now support Qiskit Runtime. Now everybody can take advantage of our untouchable quantum performance.
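As a sketch of that usage model, here is what submitting a small estimation program looks like with recent versions of the qiskit-ibm-runtime package, which postdates the beta described here; the saved account and the ibm_washington backend name are placeholders.

```python
# A minimal sketch of the Qiskit Runtime usage model: send a program of
# circuits (here, via the Estimator primitive) rather than raw circuits.
# Uses the qiskit-ibm-runtime package, which postdates the 2021 beta;
# credentials and backend name are placeholders.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Estimator

service = QiskitRuntimeService()    # assumes a previously saved IBM account

bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
observable = SparsePauliOp("ZZ")

with Session(service=service, backend="ibm_washington") as session:
    estimator = Estimator(session=session)
    job = estimator.run(bell, observable)   # executes close to the QPU
    print(job.result().values)
```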

Back to you, Jay. Thanks, Katie. So that's another step in our roadmap checked off.

To summarize, in order for adoption to accelerate, we need to also focus on the useful work quantum computers can do. We define this as performance, and it's a combination of Scale, Quality and Speed. In order to be successful, we need to constantly improve all three.

And so far this year, we've done just that. But there's a bigger story here. As we take quantum computing out of the lab and create a real business, we have to look at the value we create for our clients. This is driven heavily by the performance we can offer our clients, but it's not the only thing. Of course, being scientists and engineers, we love equations.

So here's another one for you. We see client value as being driven by a combination of performance, which we already talked about, plus the capabilities of the system. By capabilities, we mean the things that quantum computers can do to move us towards quantum advantage: things like error mitigation, circuit knitting, and other fun stuff we're gonna talk about in a minute. The other variable that drives client value is ease of delivery and use, or what we call Frictionless. Our aim is to effectively eliminate all barriers to adoption and usage, making quantum computing work the way the world already works. So next, I want to introduce Sarah Sheldon to the stage to talk about our recent developments in capabilities.

Welcome, Sarah. Hi, Jay. Hi, Jerry. As Katie mentioned, we're seeing amazing results by using powerful combinations of classical and quantum resources. What Katie described involves squeezing more performance from our QPU at the circuit level, by combining it with classical resources to remove latency and increase efficiency.

We call this classical with a little 'c'. But that's not the whole story. We've also discovered we can use classical resources to accelerate progress towards quantum advantage and get us there earlier. In other words, to drive the development of our system capabilities. To do this, we use what we call classical with a capital 'C'. And this is what I wanna talk about today.

This year, we are diving into research on capabilities, which is earlier than we'd first planned. These capabilities will sit at both the kernel and algorithm levels. We see them as a set of tools allowing users to trade off quantum and classical resources to optimize the overall performance of an application. At the kernel level, this will be achieved using libraries of circuits for sampling, time evolution, and more. But at the algorithm level, this is where it gets exciting.

We see a future where we're offering pre-built Qiskit Runtimes in conjunction with classical integration libraries. Let's look at these capabilities more closely, starting with what we call circuit knitting. This class of techniques decomposes a large quantum circuit, with more qubits and larger gate depth, into multiple smaller quantum circuits with fewer qubits and smaller gate depth.

Then it combines the outcomes in classical post-processing. This allows us to simulate much larger systems than ever before. We can also knit together circuits along an edge where a high level of noise or crosstalk would be present.

This lets us simulate quantum systems with higher levels of accuracy. This year, we demonstrated circuit knitting by simulating the ground state of a water molecule using only five qubits, with the specific technique of entanglement forging, which knits circuits across weakly entangled halves.
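As a toy illustration of the cut-and-recombine idea, here is the easiest possible case: nothing crosses the cut, so the expectation value of the full circuit simply factorizes into the product of the two halves. Real circuit knitting, and entanglement forging in particular, handles entangled cuts with far richer classical post-processing; every circuit and observable below is invented for illustration.

```python
# Toy cut-and-recombine example in the trivial case where no gates cross
# the cut: <ZZZZ> of the 4-qubit circuit equals the product of the two
# 2-qubit <ZZ> values.  Real circuit knitting handles entangled cuts with
# much richer classical post-processing.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp, Statevector

def half():
    qc = QuantumCircuit(2)
    qc.ry(0.7, 0)
    qc.cx(0, 1)
    qc.rx(0.3, 1)
    return qc

full = QuantumCircuit(4)
full.compose(half(), [0, 1], inplace=True)
full.compose(half(), [2, 3], inplace=True)

zz = SparsePauliOp("ZZ")
knitted = (Statevector(half()).expectation_value(zz)
           * Statevector(half()).expectation_value(zz))
direct = Statevector(full).expectation_value(SparsePauliOp("ZZZZ"))
print(round(knitted.real, 6), round(direct.real, 6))   # the two agree
```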

With circuit knitting, we can boost the scale of the problem we can solve, or the quality of the result, by trading off our speed. So for this reason and many others, we need to have an excess of speed. And that's exactly what Qiskit Runtime gives us by maximizing the utilization of our fast superconducting quantum processors. Here are some other examples. With dynamic circuits, we use mid-circuit measurement and feed-forward to reduce the depth of a circuit through short-depth preparation of entangled states.

Dynamic circuits also enable more efficient use of qubits by freeing them up for reuse. So the quality, scale, and speed of the circuit can all be boosted. There is no trade-off. For all these reasons, we see a huge role for dynamic circuits. They're one of the major steps on our roadmap for next year.
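In recent Qiskit versions, the mid-circuit-measurement-plus-feed-forward pattern looks roughly like this; it's a minimal sketch, and actually running it depends on hardware support for dynamic circuits, which is the roadmap item.

```python
# Minimal sketch of mid-circuit measurement + feed-forward: measure half of
# a Bell pair, then use the outcome in real time to steer the other qubit
# back to |0>, all within one circuit.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)                  # Bell pair
qc.measure(0, 0)             # mid-circuit measurement
with qc.if_test((qc.clbits[0], 1)):   # feed-forward on the outcome
    qc.x(1)                  # correction steers qubit 1 to |0>
qc.measure(1, 1)             # now deterministically 0
```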

Error mitigation techniques use additional quantum circuits that either amplify the noise, as shown here, or calibrate the noise, followed by a classical computation that finds the mitigated expectation value. Both error mitigation and circuit knitting borrow from the speed bucket in order to boost the scale of the problem that can be addressed or boost the quality of the results. As we look to the future, we will continue to explore these trade-offs by borrowing from scale to encode logical qubits from physical qubits and implement error-corrected circuits. The plan is to combine all of these capabilities to optimize performance as our quantum systems keep improving.
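To make the "amplify the noise" flavour concrete, here is a toy zero-noise extrapolation in plain numpy. The measured values are invented purely for illustration; real implementations stretch the noise with techniques like gate folding and often use richer fits than a straight line.

```python
# Toy zero-noise extrapolation: run the same circuit at stretched noise
# levels, then fit and extrapolate back to zero noise classically.  The
# data points are made up purely for illustration.
import numpy as np

noise_scales = np.array([1.0, 2.0, 3.0])   # e.g. via gate folding
measured = np.array([0.82, 0.68, 0.55])    # hypothetical <O> at each scale

slope, intercept = np.polyfit(noise_scales, measured, 1)
print(f"mitigated estimate at zero noise: {intercept:.3f}")
```

Back to you, Jay. Thanks, Sarah.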

Now I want to share some exciting news about what we call frictionless. Let's bring Katie back to talk more about this. Katie.

So this brings us to our fifth announcement for today. Today we announced Quantum Serverless, the next step for Qiskit Runtime. Quantum Serverless will allow us to incorporate all of the advanced capabilities Sarah just covered into a simple developer experience, with all the speed and flexibility you already have with Qiskit Runtime.

As a developer or a researcher, you should not have to set up classical resources. You wanna focus on the code, not the infrastructure, not the servers, not the container management, or any of that stuff. You want the classical resources to scale automatically when you need them, while paying only for what you use.

And you still want the flexibility to change your code and select whatever classical hardware you wanna work with, whether it's HPC, GPUs or whatever. So in partnership with IBM Cloud, we're now gonna show you an example of how Code Engine can be combined with quantum computing to make Quantum Serverless a reality. The first step is to define the problem. In this case, we're using VQE. Second, we use Lithops, a Python multi-cloud distributed computing framework, to execute the code. Inside this function, we open a communication channel to the Qiskit Runtime and run the estimator program.

As an example of the classical computation, we use the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm. This is just an example; you could put anything here. So now the user can just sit back and enjoy the results.
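Here is a stripped-down sketch of that pattern. Lithops is the real framework named in the demo, but the quantum call is stubbed out with a classical placeholder so the sketch stays self-contained, and the localhost executor stands in for IBM Cloud Code Engine.

```python
# A minimal sketch of the Quantum Serverless pattern from the demo: Lithops
# fans the classical work out to functions (localhost executor here, Code
# Engine in the demo).  The body where the demo opened a channel to the
# Qiskit Runtime "estimator" program is stubbed with a classical placeholder.
import lithops

def evaluate_point(params):
    # Placeholder for: open a Qiskit Runtime channel, run the estimator
    # program, return the measured energy for these VQE parameters.
    return sum(p * p for p in params)

fexec = lithops.LocalhostExecutor()
futures = fexec.map(evaluate_point, [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
print(fexec.get_result(futures))
```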

As quantum is increasingly adopted by developers, Quantum Serverless enables them to just focus on their code. Without getting dragged into configuring classical resources, they'll have more time, be more productive, and have more opportunity to iterate towards success. Back to you, Jay. Thanks again, Katie. We've completed our roadmap for 2021. We're also on track for our goals for 2022, and we are confident that 2023 will be an important year.

Simply put, after 2023, assuming we continue at the same rate, we are in the land of quantum advantage. We say the land because we don't think quantum advantage will be a fixed point in time or a specific event. We see it more as a continuum, where applications start off abstract and esoteric and slowly become more useful over time. So coming back to our value equation.

If we have a pretty clear idea of our performance, our capabilities and the barriers to entry and adoption, then we should be able to start to extrapolate client value going forward. So I'd like to introduce our friend, Matt, from the Boston Consulting Group to join us on stage. Over to you, Matt. I'm Matt Langione, Principal of the Boston Consulting Group, and one of the leaders in our deep tech mission.

Over the past three years, we at BCG have had hundreds of conversations with potential end users of quantum computing, in boardrooms, out in the field, in government labs. We've worked hand in hand to find high-value use cases for quantum computers when they mature. But the natural follow-up question, when in fact will they mature, has been too difficult to answer with the right level of specificity. Until now. The new roadmap that IBM has laid out allows us to chart a path to value that ties the computational complexity of real industry use cases to specific milestones in solution maturity that will play out over the next few years. The results are very, very promising.

IBM's roadmap is not just concrete. It's also ambitious. We think the technical capabilities that Jay and team have outlined today will help create $3 billion in value for end users during the period described. And they set the groundwork for much more to come.

Take financial services, where portfolio optimization is limited today by the type of constraints that classical optimizers can handle at scale. They struggle with non-continuous, non-convex functions: things like interest rate yield curves, trading lots, buy-in thresholds, and transaction costs. Adding these constraints to the calculation makes the optimization surface too complex for classical optimizers, but it could also improve trading strategies by as much as 25 basis points, within reach with gate fidelity at four nines by 2024. With runtimes that integrate classical resources and have error mitigation built in, we believe this is the sort of capability that could be in trader workflows by 2025.

So too for the mesh optimizers that power computational fluid dynamics for aerospace and automotive design. In the next three years, quantum computers could already start powering past the node limits that constrain surface size and accuracy today. IBM can't do it alone. It will take an entire ecosystem to specify high-value problems and design the right solutions.

But a concrete roadmap like this gives developers, business leaders, and investors confidence that the engine that powers it all is only getting stronger and stronger. Thanks, Matt. Thanks, Jay. The future is looking good, and not as far away as we once thought. With all this talk of the future, it's easy to forget that quantum computing is already firmly here in the present. As of today, we have Quantum System Ones deployed across three continents, with 22 systems in our data center in New York alone.

This year, we launched a Quantum System One at Fraunhofer in Germany, and then again a few months later at the University of Tokyo in Japan. Since installation, both sites have activated their local ecosystems. Scientists are doing novel research, customers are being signed up, and adoption and usage numbers are up by a factor of 10.

We are also very excited about our work with Cleveland Clinic, which will be our first managed quantum system deployed in the private sector. And finally, with the recent news about our partnership with Yonsei, I am confident that with more scientists using these machines, more capabilities will be discovered and more applications of quantum computing will emerge. One of the most challenging aspects of quantum computing is balancing the needs of the present with those of the future. The years 2024 and beyond will be a strange, unpredictable world.

There will be a Cambrian explosion of demand for different configurations of quantum systems, and the success of quantum computing applications will not be defined by a singular, standalone system like the IBM Quantum System One. We've designed Quantum System Two to fit into this dynamic, evolving future. It's less a system than a modular architecture, designed to scale and fit the needs of many different future requirements. So finally, I'd like to hand over to Jerry to give us a glimpse of the future.

Over to you, Jerry. Thanks, Jay. So our Quantum System One has become iconic in terms of its look.

What I'd like to do is actually take a step back and ask: what exactly is a quantum system? And this is where there's often a common misconception. It's natural to just tie a system to a specific number of qubits. So I'm often asked, is System One a Falcon? Or is it a Hummingbird? Well, in both cases, the answer is yes. The reason is that System One is the combination of components and wiring within a cryogenic platform, combined with control electronics, all to support families of processors. In the case of System One, the combination of these various technologies enables our Falcon and Hummingbird processors, and can even stretch to handle Eagle. And so really, where we are today is the closing of the chapter on Quantum System One, as we prepare for our next generation, IBM Quantum System Two.

We are actively working on an entirely new set of technologies, from novel high-density cryogenic microwave flex cables to a new generation of FPGA-based, high-bandwidth integrated control electronics. And on top of that, I'd like to announce a really important partnership. I actually think this is our sixth announcement today. IBM Quantum and our partners at Bluefors are together imagining the next generation of cryogenic infrastructure that will help enable our Quantum System Two.

Today, we have a special guest traveling here all the way from Finland. I'd like to introduce Russell Lake, the quantum technology lead for Bluefors to tell you all about their newest cryogenic platform. Russell.

Thanks, Jerry. Controlling more qubits than ever before motivates a new way of thinking about quantum measurement systems, starting with the cryogenic system itself. To meet these challenges, Bluefors introduces a new cryogenic platform, which we call KIDE. In Finnish, KIDE means snowflake or crystal, which represents the hexagonal crystal-like geometry of the platform that enables unprecedented expandability and access. Even when we create a larger platform, we maintain the same user accessibility as with a smaller system.

This is crucial as advanced quantum hardware scales up. We optimize cooling power by separating the cooling for the quantum processor from the operational heat loads. And in addition, the six-fold symmetry of the KIDE platform means that systems can be joined and clustered to enable vastly expanded quantum hardware configurations. Bluefors presents KIDE as the platform that will meet the new requirements for cryogenics, especially on IBM's quantum roadmap. Back to you, Jerry.

Thank you, Russell. So all these pieces, the novel wiring inside, the evolved control electronics, and the world's most advanced commercial cryogenic platform, are all going to combine to create our future systems. Here is our design concept rendering for IBM Quantum System Two, which we're designing right now to house quantum processors such as Condor. Thanks, Jerry. As you can start to imagine, the modular nature of IBM Quantum System Two will be the cornerstone of future quantum data centers. As I said at the start, this is huge.

We've made a lot of announcements this year. One, we've broken the hundred-qubit barrier, which puts us into uncharted territory. Two, we've broken the 0.001 error-per-gate barrier. Three, we're bringing Falcon R5 from exploratory to our core systems. Four, all of our systems now support Qiskit Runtime. Five, we introduced Quantum Serverless.

And six, we've shared our plans for Quantum System Two and our partnership with Bluefors. We have 433 qubits planned for next year and over a thousand qubits the year after. And it's not just scale. We've taken tremendous strides forward in performance through increases in QPU speed and quality this year. And we're planning even bigger strides next year.

Dynamic circuits are a big deal. They'll bring the power of real-time classical computations to circuits. And as Sarah said, they will improve performance in a myriad of ways. And we will have more to say on this soon.

We've seen tremendous growth in both capabilities and frictionless programming. All of these drive value for our users, with our friends from Boston Consulting Group painting a very promising future. The theme of this year's Quantum Summit is New Dimensions, and we chose that theme for a good reason. Our dream is to be able to create machines that compute higher dimensional mathematics at scale, so we can start to tackle some of our biggest challenges. It's no longer a dream. It's a destination, and we're on our way.

Thank you.
