WEBINAR | Mass Spectrometry Past and Present - Emerging Technologies and Strategies for Quality Management in Today's Clinical Laboratory
Hello, everyone, and welcome to today's live broadcast, Mass Spectrometry Past and Present, Emerging Technologies and Strategies for Quality Management in Today's Clinical Laboratory. Presented by Dr. Sadrzadeh, Dr. Orton, and Dr. Boyd from Alberta Precision Laboratories. I'm Benjamin Dugas, Global Senior Marketing Manager for Clinical Diagnostics at Waters Corporation, and I'll be your moderator for today's event.
Today's educational web seminar is brought to you by LabRoots and sponsored by Waters Corporation. For more information on our clinical solutions, please visit us at waters.com/clinical. At Waters, we understand that clinical diagnostics is more than collecting data. It's making a difference in someone's life. This is why we provide clinical LC-MS/MS solutions that you can trust every step of the way. Now let's get started.
I'd like to remind everyone that today's event is interactive. We encourage you to participate by submitting as many questions as you want at any time that you want during the presentation. To do so, simply type them in the Ask a Question box and click on the Send button. Additionally, we'll be conducting a few polls and appreciate your participation. In fact, we're going to send off the first poll now. If you have trouble seeing or hearing the presentation, click on the Support tab found at the top right of the presentation window, or report your problem by clicking on the Ask a Question box located on the far left of your screen.
This presentation is educational and thus offers continuing education credits. Following the presentation, those credits can be obtained by clicking on the tab at the top right and following the process to obtain your credits. I'd like to now introduce our three presenters from Alberta Precision Laboratories. Dr. Sadrzadeh is currently the Clinical Section Chief of Biochemistry at Alberta Precision Laboratories as well as a Clinical Professor of Pathology and Laboratory Medicine at the University of Calgary. Dr. Sadrzadeh is a widely-recognized clinical
chemist and scientist who has authored over 220 peer-reviewed articles, book chapters, monographs, and abstracts, and has co-edited a book. He also directs clinical projects aimed at developing new markers and methods using LC-MS/MS technology, which he has implemented in different laboratories over the past 20 years. Dr. Orton is a Clinical Biochemist overseeing the Mass Spectrometry Testing Laboratory in Calgary for Alberta Precision Laboratories, as well as an Assistant Professor at the University of Calgary.
His current research involves the development of LC-MS/MS assays to elucidate a greater understanding of individuals' responses to drug therapy and pharmacokinetics. Dr. Boyd is a Clinical Biochemist at Alberta Precision Laboratories and Clinical Associate Professor at the University of Calgary. She is the Co-Director of the Analytical Toxicology Laboratory, which includes mass spectrometry testing for urine, drugs of abuse, therapeutic drug monitoring, and endocrine markers. For a complete biography on all of our presenters today, please visit the Biography tab at the top of your screen. So with that, I'd like to hand it over to our presenters to start the presentation.
Dr. Sadrzadeh? Thanks, Ben. Hello, everybody. In this presentation, we will discuss the history of mass spec, compare the strengths and weaknesses of different types of mass spec, and review the quality metrics required to ensure assay quality and meet the necessary standards.
The first part of this presentation will cover a brief review of the history of mass spec. Many scientists have had a significant impact on the development of mass spec. However, due to limited time today, I will not be able to acknowledge the work of all of them. So I will briefly describe the work of some of the key contributors to this field. Although it was originally used in the field of physics more than a century ago, mass spec has revolutionized the practice of laboratory medicine in the past two decades.
Mass spec is an extremely powerful technology that can identify and quantify any molecule by ionizing that molecule and measuring the mass-to-charge ratios of its generated ions. Mass spec is currently the most specific and sensitive technique in the clinical laboratory, and mass spec will continue to positively impact the practice of laboratory medicine in all disciplines. The history of mass spec starts with Joseph John Thomson, a physicist who was given an outstanding opportunity at the famous Cavendish Laboratory at Cambridge University in England. He was a 28-year-old brilliant theoretical physicist who, for the first time, was given an experimental position in that excellent institution. At that time, transmission of electricity through gases was a hot topic, and Thomson chose that topic for his research.
And his assistant, E. Everett, was extremely important and had a significant role in the successful completion of the experiments in that laboratory. In 1897, Thomson measured the charge-to-mass ratio of some atoms. Please note that early physicists reported charge-to-mass ratio, not mass-to-charge ratio as we currently do. Two years after that discovery, Thomson built an instrument that could measure not only charge over mass, but also charge simultaneously, and therefore he was able to measure the mass of the electron.
Thomson received the Nobel Prize in Physics in 1906 for discovering the electron. Here, you also see the picture of Aston, one of Thomson's associates, whose work I will describe in the next slides. The early work by Thomson laid the foundation for the field of mass spec. Thomson and his protege, Francis William Aston, built the first mass spec to measure the mass of charged ions in 1919. Aston was able to identify 212 of the 287 naturally occurring isotopes with the mass spec of that time, which was quite an accomplishment.
He continued to improve his mass spec's resolution, and by 1927, Aston's mass spec was accurate to better than one part in 10,000. Aston himself received the Nobel Prize in Chemistry in 1922. You can see the neon-20 and neon-22 ions at the lower right corner.
And here is a picture of a replica of the third mass spec made by Thomson. In the early 1940s, an electrical engineer from the Physics Department at the University of Minnesota by the name of Alfred Nier started his significant contribution to this field by simply bringing this technology, meaning mass spectrometry, to other scientists. Nier was a modest gentleman, ready to share his knowledge with other scientists. He built several major instruments, including a 60-degree sector field mass spec, and with E.G. Johnson, he built the Nier-Johnson mass spec. Nier also helped biologists by preparing carbon-13-enriched carbon for them to use as a tracer.
He measured lead isotopes for geochemists so they could determine the age of the Earth. Nier also contributed to the Manhattan Project by separating uranium-235 from uranium-238 for Enrico Fermi. As you know, Enrico Fermi was a top scientist on the Manhattan Project.
In the early 1940s, physicists did not know which uranium isotope was responsible for neutron fission. Nier was able to separate a few nanograms of the uranium-235 isotope. He then sent that to John Dunning's laboratory at Columbia University, and Professor Dunning was able to confirm that uranium-235 was indeed the isotope responsible for fission and could be used to make an atomic bomb. Up to 1940, mass spec was mostly used by physicists and industrial chemists.
No one really tried to understand what went on inside that black box. In the mid-1950s, three chemists started to use mass spec in their research and tried to better understand this technology. Fred McLafferty focused on mass spec instrumentation and methodology. He established the rules and language of mass spec using compounds of known structure. Klaus Biemann from MIT used mass spec on alkaloids and peptides and was among the very first to show that mass spec can be used to identify the structure of unknown complex natural compounds. Carl Djerassi was an established scientist at Stanford University who focused mostly on steroids and terpenoids and came to the field of mass spec later than the other two gentlemen.
In 1960, Djerassi invited Professor Biemann to Stanford to help him set up a mass spec lab, and he continued his research on steroids using mass spec after that. It wasn't until the 1980s that mass spec was used in the clinical laboratory. In 1981, a military plane crashed on an aircraft carrier, and the pilot's urine tested positive for marijuana. In response to that accident, President Reagan enacted a zero-tolerance policy for drugs of abuse in the military, which then required testing. At that time, drugs of abuse were tested by immunoassays. Immunoassays, in general, are not very specific and can generate false positive results that must be confirmed by a mass spec assay.
In the 1980s, gas chromatography mass spectrometry was mostly used in the clinical toxicology lab. Thus, this requirement of testing for and confirming positive results drove mass spec into the toxicology laboratory. By 1980, although scientists could ionize small molecules and measure them by mass spec, they could not ionize large molecules like proteins and other complex carbohydrates without breaking them into fragments. In 1988, scientists in different parts of the world, almost at the same time, developed different technologies to ionize large molecules such as proteins. John Fenn and his team at Yale University discovered electrospray ionization.
Also, Koichi Tanaka, who was an engineer at the Shimadzu company in Japan, with the help of his research team, was able to successfully ionize large proteins using laser desorption. John Fenn and Koichi Tanaka shared the Nobel Prize in Chemistry in 2002, which was given to them for their development of soft desorption ionization methods for mass spectrometric analysis of biological macromolecules. At the same time that Fenn and Tanaka were working on protein ionization, two German scientists from the University of Frankfurt, Franz Hillenkamp and Michael Karas, were also working on protein ionization. They used a different technology, called the laser microprobe mass analyzer, that helped them successfully ionize large protein molecules.
They called their new technique matrix-assisted laser desorption ionization, or MALDI for short, and published it in 1985. These advancements made mass spec much more user-friendly and gave all scientists the opportunity to finally use this technology without devoting their careers to the technique itself. As I mentioned at the beginning of my presentation, because of the time limitation, I could not cover the work of all the great scientists who in one way or another contributed to the development of mass spectrometry. I'm sorry that I wasn't able to mention them all. It was not intentional. I'd like to thank you for your attention, and now I will pass this to my colleague, Dr. Jessica Boyd.
Thank you. Thank you very much, Dr. Sadrzadeh. So my section of the talk will be on the basic principles of mass spectrometry and the applications of this technology in the clinical lab. This slide shows some of the applications that mass spec is currently used for. You can see that there are many applications currently in use in chemistry, and of course in microbiology, mass spec is used for bacterial identification. If you were to boil down mass spec to its most basic components, you can kind of put it into four different boxes, which I've put here.
Of course at the beginning, you usually need to do some kind of sample preparation in order to clean up the matrix, concentrate the analyte, or put the sample into a form that can be introduced into the mass spec. This can be done online or offline. Then when the sample is ready, we need to ionize it so that it can be seen by the mass analyzer.
The mass analyzer itself is then able to filter the ions so that we can control which ones are actually hitting the detector at any given time, which then provides us with the data that is so useful and that we would then analyze and put into a form that is reportable to physicians. I'm going to go through these three blue boxes here very quickly over the next few slides, starting with sample introduction. So this slide shows some of the ionization types that are present in the clinical lab. And you'll see that I have organized them by the state that the sample is in. On the top line, we have gas chromatography. Typically here, we're using either electron impact or chemical ionization sources.
And the EI, in particular, has been used to great effect for analysis of drugs of abuse in urine. Liquid chromatography is really coming into vogue in the clinical chemistry laboratory, and in my lab, for example, this is what we're using most commonly coupled to an electrospray ionization source. Of course, we have other options such as an APCI source, which are used for analytes that are more lipophilic and perhaps don't ionize as well with ESI. And then finally, we also have solid state analysis, and a good example of this is MALDI. So just to spend a minute on electrospray ionization because it is so prevalent in clinical chemistry, on the left is a diagram of how ESI works. You'll see on the very left that there is a tube with liquid spraying out of it.
And this is the ESI capillary that is coupled to the end of your LC system. Modifier and pH adjustments should be made to the mobile phase so that your analyte is ionized, and then a current or voltage can be applied to the ESI capillary so that it nebulizes the sample, forming droplets containing charges, which you can see at the bottom of the picture. Then, within the source, drying gas and heat can be applied to evaporate the solvents so that ultimately, and ideally, what you have entering the mass spec are single, individual, desolvated ions. Compare that to MALDI, where we have a solid sample. Here in this picture, you can see we have our analyte depicted with green circles, and it's embedded in an ionizable matrix, which is depicted by the blue circles. When a laser is pointed at this mixture, the ionizable matrix will ionize, and it will transfer its charge to our analyte of interest so that it is able to enter the mass spectrometer and be analyzed.
So now we'll move onto the mass analyzers. So it's important to remember that mass analyzers manipulate ions. And so that's why we need to make sure that we have a good ionization source sitting in front of our mass spec.
There are many different types of mass analyzers available, each with their own strengths and weaknesses. Today, I will only be able to touch on quadrupole and time of flight analyzers. And we're also able to put mass analyzers in series to form a tandem mass spectrometer. And this really increases options for the types of scans that we can do to look at fragmentation patterns.
And this really gives mass spec its edge in terms of having higher specificity compared to other methods we might use in the clinical chemistry laboratory. Just to show how these fragmentation patterns can be used to help identify analytes, here I'm showing a GC electron impact spectrum of cocaine. Now, electron impact is a hard ionization source, so we get fragmentation of the parent ion in the source. And so when all those ions move to the mass analyzer, you can see that we have a mixture of parent cocaine as well as fragments of cocaine. Now, we know that if we always apply the same voltage in EI, we're able to reproduce the spectrum every single time we do that analysis, both in terms of what's produced and the intensity of what's produced. And so for GC-MS, this has been pivotal in helping us produce libraries that can then be used to help us identify drugs, for example, in patient samples.
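(An illustrative aside, not from the presenters: library searching of a reproducible EI spectrum is commonly scored with a cosine similarity between the observed and reference peak intensity patterns. The sketch below assumes spectra stored as simple m/z-to-intensity maps, and the numbers are invented for illustration, not real cocaine spectra.)

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Score two spectra (dicts of m/z -> intensity) by cosine similarity.
    1.0 means identical relative intensity patterns; 0.0 means no shared peaks."""
    mzs = set(spec_a) | set(spec_b)
    a = [spec_a.get(mz, 0.0) for mz in mzs]
    b = [spec_b.get(mz, 0.0) for mz in mzs]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical library spectrum vs. an observed spectrum (m/z: intensity)
library = {82: 100, 182: 55, 303: 30}    # invented values
observed = {82: 95, 182: 60, 303: 28}
score = cosine_similarity(library, observed)  # close to 1.0 -> likely a match
```

Real library search software uses more elaborate weighting, but the idea is the same: because EI fragmentation is reproducible at a fixed voltage, an unknown's intensity pattern can be compared numerically against every library entry.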
In LC-MS, although we don't necessarily have the same kind of library system that we do for GC, you know that when you do your method validation and determine the collision energy for your triple quad, for example, if you apply that same collision energy under the same conditions every single time, you will produce the same fragment ions and intensities. And we can use that as an extra layer to help identify compounds in our samples. So this is a picture of a quadrupole mass analyzer, and of course it gets its name from the configuration of the four rods on which electrical fields are applied so that we can affect the trajectory of ions through the rods.
In clinical labs, and particularly in my lab for example, we see this in a triple quadrupole configuration. And so there's a picture here at the bottom of the slide. You can see that there are three quadrupoles in series. The Q1 and Q3 are what we use as a mass filter, and Q2 is used as a collision cell so that we can fragment ions coming in from Q1.
The scan type that we use almost exclusively in many labs is called multiple reaction monitoring, and using a triple quadrupole, how that would work is that in Q1, we would select for the parent ion of a particular compound or drug, for example. We would then send that into Q2 and fragment it with nitrogen or argon gas, and then in Q3, we would select at least two, usually, fragment ions that we would then monitor and send to the detector. Each of these combinations of a parent and a fragment ion is called an MRM transition. And like I said, we usually monitor at least two of these for every analyte in our method. To give you an idea of what that looks like practically, this is a snippet of one of our reports from an LC-MS/MS MRM drug screening method that we run in my lab, and this is for benzoylecgonine, which is a major metabolite of cocaine. And so we have three pictures here.
The first is a chromatogram of what we would call our quantifier chromatogram, or quantifier MRM transition. And this is a transition that we would have identified during method validation as often, but not always, the highest intensity one. And so you can see at the top of that box it says 290.1 to 168.1, so that is the transition we're monitoring. Our staff would look at this and, of course, assess the chromatography and the retention time to determine whether we think this is acceptable for benzoylecgonine identification. But we don't stop there.
We would then go to the second box, and this is actually an overlay of the chromatograms from two MRM transitions. So again, if you look at the top of that box, you'll see our quantifier transition of 290.1 to 168.1, but you'll also see one in blue that is 290.1 to 105.1. And this is our qualifier MRM. And we use this as an extra confirmation to determine whether our drug is actually present in the sample. Based on how we set up the software for this particular assay, we've asked the software to actually normalize the two chromatograms so that if we have a good match between the two, they should actually overlay, and it should actually look like one line.
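(As an aside from the editor, not the presenters: the qualifier check can also be expressed numerically as an ion ratio, the qualifier peak area divided by the quantifier peak area, compared against the ratio established during validation within a tolerance window. The areas, expected ratio, and 20% tolerance below are all invented for illustration, not this lab's actual criteria.)

```python
def ion_ratio_ok(quant_area, qual_area, expected_ratio, tolerance=0.20):
    """Check the qualifier/quantifier area ratio against the validated ratio.
    tolerance is a fractional window, e.g. 0.20 = +/-20% (an assumed value,
    not a universal standard); real limits come from method validation."""
    if quant_area <= 0:
        return False
    observed = qual_area / quant_area
    return abs(observed - expected_ratio) <= tolerance * expected_ratio

# Hypothetical example: validated qualifier/quantifier ratio of 0.45
ion_ratio_ok(quant_area=120000, qual_area=54000, expected_ratio=0.45)  # ratio 0.45: passes
ion_ratio_ok(quant_area=120000, qual_area=30000, expected_ratio=0.45)  # ratio 0.25: fails
```

A failing ratio is the numeric counterpart of the two overlaid chromatograms not lining up: either the drug isn't really there, or something is coeluting in one of the transitions.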
And that's actually what it looks like here. If benzoylecgonine were not present in this sample, it would be quite clear, because we would have two lines visible, and they wouldn't overlay at all. We'd have different bumps and lumps and things. This is also helpful to identify if you have a coeluting compound in your chromatogram in one of the MRM transitions, where one of these might appear to have a bump on it that the other doesn't. And then of course, in the third box, we have an MRM transition monitored for our internal standard, which in this case is benzoylecgonine-D3. Here's a picture of a time of flight mass analyzer.
And so the difference here is that it contains a flight tube where ions are actually going to fly through. They'll go to the bottom and be reflected by a reflectron and then come back and hit the detector, which is on the right-hand side there. Mass is measured in this analyzer based on the time it takes to do this transit. And smaller ions are able to do this faster than larger ones.
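(An editor's sketch of the underlying physics, simplified to an idealized linear ToF with no reflectron: an ion of charge z accelerated through potential V acquires kinetic energy zeV = mv²/2, so over a flight path L, m/z = 2eV(t/L)². The voltage, path length, and flight time below are invented example values.)

```python
E_CHARGE = 1.602176634e-19    # elementary charge, coulombs
DALTON = 1.66053906660e-27    # unified atomic mass unit, kg

def mz_from_flight_time(t_s, flight_path_m, accel_voltage_v):
    """Return m/z in daltons for flight time t_s (seconds) in an
    idealized linear ToF: m/z = 2*e*V*(t/L)**2."""
    m_over_z_kg = 2 * E_CHARGE * accel_voltage_v * (t_s / flight_path_m) ** 2
    return m_over_z_kg / DALTON

# With a 20 kV acceleration and a 1 m flight tube (assumed values),
# a singly charged ~1000 Da ion arrives in roughly 16 microseconds.
mz = mz_from_flight_time(1.61e-5, flight_path_m=1.0, accel_voltage_v=20000)
```

Note that mass scales with the square of the flight time: doubling the transit time corresponds to a four-fold heavier ion, which is why ToF instruments need very precise timing electronics.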
So by measuring the time of transit, we are able to calculate the mass of that original ion. Now of course, ToF is often coupled with MALDI for use in the clinical microbiology lab, and so here's just a picture putting that all together. You can see on the left, we have our sample culture, which would then be mixed with the blue matrix and then spotted onto a MALDI-ToF sample plate.
That plate, if you then look at the picture on the right, can be introduced into the ToF instrument and hit with the laser beam, and then you can see the ions going through the flight tube to the detector. There are also some other areas that are interested in using MALDI in the clinical lab, particularly in histology. And so here is an example of where we would be able to analyze tissue sections on a slide using MALDI to provide extra information, in terms of protein composition and localization within a slide, for the pathologist in addition to other conventional histology staining. So just to finish, I wanted to go through a few things that we found very helpful to consider when implementing new mass specs or new mass spec methods in our lab. First off, of course, selection of the mass analyzer will depend on what you want to measure with it and the type of data desired; whether you want to do full scans on everything or take a targeted approach may influence what you would get. One of the major benefits of mass spec is that you can analyze many analytes simultaneously from one sample.
However, keep in mind that everything you add to a method adds complexity in terms of running the method, maintaining the method, and doing the data analysis. And so you need to determine whether that's going to be useful or not to do that. Method validation is different than what you would do for a general chemistry analyzer, and that's whether you're doing it as a lab-developed test or evaluating a kit method from a company. And there will be more information on that in the third part of the talk.
Staff training often takes longer, as the instruments and the data analysis are more manual than automated chemistry analyzers, although we're now starting to see some solutions to help, particularly with the data analysis. And finally, don't forget about the post-analytical part. Interfacing to the laboratory information system can significantly improve your workflow and your turnaround time, and you should always consider it when putting a mass spec or a new method into your lab. So with that, I'm going to pass it off to Dr. Orton. OK.
Thanks, Jessica. So hi. My name is Dennis Orton. I am the Clinical Biochemist who oversees the LC-MS testing lab in Calgary for APL.
And I'm going to focus this part of my talk primarily on LC-MS methods. So Jessica reviewed quite a few different applications, but LC-MS is definitely kind of the workhorse of the clinical lab at the moment, so I thought I would just go over some of the applications and the quality metrics that we should be monitoring. And as these methods and LC-MS become available in more and more labs, I feel like it's really important to focus on what the vendor may be trying to sell you with a kit method. If you've seen The Matrix, you kind of understand what I mean: what they're selling you is the blue pill, but what you're actually getting is the red pill.
And my own personal experience would lead me to take the blue pill every time. But yeah. So we'll just talk about some of the LC-MS applications that we have in our lab.
OK. So basically there are so many different aspects of an LC-MS method that you can measure. You can measure every single parameter of a peak you could ever imagine.
You can look at the asymmetry. You can look at the retention times and whether or not the peak is tailing, the peak width, et cetera. But what quality metrics should we really be monitoring is kind of the big question. And so I'm just going to run through some of the things that I've found in my experience to be the most helpful, that get you the most bang for your buck. Within every run, you're going to always want to focus on things like the retention time and the ion ratios that Jessica was talking about in the previous section, and then other important things like the internal standard peak area, pressure plots, and peak shape. Those are all really important things to pay attention to throughout the course of a single run.
And I'll show you some examples of why that's important and what you can potentially do about it. But then also, day-to-day, we run some system suitability samples. You should track your peak areas and look specifically at whether or not the signal-to-noise ratio is changing over the age of your method, your column, or your solvents. And then I'll talk a little bit about the QC results that we're expecting. OK.
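(A minimal editor's sketch of that within-run monitoring, assuming per-injection retention times and internal standard peak areas are available from the data system. The field names, the 0.1-minute retention window, and the 50% internal-standard-area floor are all placeholders; real limits come from method validation, as Dr. Orton emphasizes later.)

```python
def flag_injections(injections, rt_target, rt_tol=0.1, is_area_ref=None, is_area_frac=0.5):
    """Flag injections whose retention time drifts beyond rt_tol minutes of
    target, or whose internal standard area falls below is_area_frac of the
    reference area. injections: list of dicts with 'rt' (min) and 'is_area'."""
    if is_area_ref is None:
        is_area_ref = injections[0]["is_area"]  # assume first injection as reference
    flags = []
    for i, inj in enumerate(injections):
        reasons = []
        if abs(inj["rt"] - rt_target) > rt_tol:
            reasons.append("retention time drift")
        if inj["is_area"] < is_area_frac * is_area_ref:
            reasons.append("low internal standard area")
        if reasons:
            flags.append((i, reasons))
    return flags

# Hypothetical run: injection 1 drifts in retention time, injection 2 loses IS signal
run = [{"rt": 2.50, "is_area": 100000},
       {"rt": 2.75, "is_area": 98000},
       {"rt": 2.52, "is_area": 30000}]
flag_injections(run, rt_target=2.50)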
So setting yourself up for success. Basically, before you start any run, it's always important to run some system suitability samples, just running several consecutive injections of a standard that was made in pure solvent. It doesn't need to have any kind of preparation or anything like that. It's just basically checking to make sure your system is functioning before you start your run. We generally say, within our lab, especially if you have a decent run time, if it's like a five-minute run time, you should be able to get 10 injections just to make sure your system's actually up and functional.
But you may want to do fewer than that if you have a really long run time. So, some of the metrics that we look at: retention time. If your retention time is highly variable injection-to-injection, you can look at whether or not your column is degrading, whether or not you have the correct column on, or whether or not your column is even connected.
Sometimes you forget to actually plug the column into the system, and so that you don't waste an entire run, you can stop it now, say, oh, forgot about that, and then fix it. You can also make sure your solvents are mixing well and there are no major leaks within the system. If you see a retention time that's bouncing around quite a bit, that's very likely some sort of leak somewhere in your system. The column equilibration is also really important.
So if your solvents aren't being given time to re-equilibrate and you have a really unstable retention time, you may just have to add 30 seconds on the tail end of your method to make sure the column actually equilibrates between injections. The pressure plot is a hugely important thing, too. It will actually detect things like minor leaks.
A lot of the time, your retention time may actually be stable and reproducible even though there is a small leak somewhere. And I find that things like purge valves can get loose, those kinds of things. So it's really important to track what your pressure is when you're starting. This is something that you'll establish when you're setting up the method in the lab, but it's also something that you have to monitor day-to-day, because if your back pressure does increase over time, you could have dirty lines or plugged guard columns, that kind of stuff, and that can affect your chromatography.
And so all you have to do is just switch out the guard column or flush a line, and you get better signal intensity. So the other thing is with your system suitability. So we actually track our signal intensity day-to-day, and this helps kind of not only monitor the chromatography, but also the solvents themselves.
So if you only run a method once a week, then your solvents can go bad real quick. Your column can degrade, which can cause issues. The mass spec stability is big, especially if you have to do things like autotunes or calibrations or whatever periodically. It's important to kind of understand how your signal-to-noise is changing over time.
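(One simple way to operationalize that day-to-day tracking, sketched by the editor rather than taken from the talk: compare today's system suitability peak area or signal-to-noise against the historical median for the method. The 70% cutoff is an arbitrary illustration; a real limit would be set from the method's observed variability.)

```python
import statistics

def suitability_drift(history, today, min_fraction=0.7):
    """Return True if today's system suitability response (peak area or S/N)
    has dropped below min_fraction of the historical median.
    min_fraction=0.7 is an assumed cutoff for illustration only."""
    baseline = statistics.median(history)
    return today < min_fraction * baseline

# Hypothetical history of daily peak areas (arbitrary units)
suitability_drift([100, 105, 98, 102], today=60)  # well below baseline: flag it
```

A flagged day would prompt exactly the interventions described here: check the solvents, the column, and the mass spec calibration before running patient samples.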
Also, vacuum leaks, ion transfer issues, voltage application issues: these are all things that you have to worry about when you're running a method. A system suitability sample can look many different ways. Pretend this is a situation where I'm injecting a series of system suitability samples over and over again within the same batch. If you see the peak transition from nice and sharp, with a good retention time, to more blobby, and then finally the signal just drops off and you've got nothing, this could indicate anything from column degradation to a loss in performance. Usually this is going to be a solvent mismatch or a lack of column equilibration. So again, just give it a little bit more time between the end of your gradient and the next injection.
This can also be caused by a back pressure increase. Your system suitability sample should be made up in a matrix that's very similar to the solvent that's actually running through your system at the time. If you're injecting pure methanol, for example, it can cause precipitation of the buffers in your solvents, or something along those lines, within your system, which can plug it and cause these kinds of problems. So it's just really important, because you can see these kinds of things coming.
Like, you could actually stop it at the middle pane here. If you have your nice, sharp peak normally and then you see that it's kind of blobby and no good, you don't have to wait for the tenth injection in the system suitability to stop it, do some intervention, and actually get to it in a timely way. Conversely, you can have the exact same pattern of peak areas, but with a nice, pretty peak.
This, again, is usually related to solvents or columns, or improper sample preparation. It could be that something is wrong with your injection valve, or there's a small leak somewhere in the system that's causing your sample to not get efficiently loaded onto the column. This one is an especially frustrating situation: your peak still looks good, but your peak area completely dropped off. It can be extremely frustrating to troubleshoot. But generally, it's going to be something like your solvent pH not being stable, or the mass spec itself having a vacuum leak somewhere.
It could also be the sample not being injected properly, and those kinds of things you have to check into. Now, when you're monitoring the internal standard over the course of the run, it is very important to derive your acceptability criteria during validation. It's kind of outside the scope of this talk for me to cover how to validate or how to set up a mass spec method as a lab-developed test. My focus is more on the quality metrics that you should be trying to keep track of, both during clinical validation as well as during operation.
So you kind of have to come up with some acceptability criteria for your internal standard injection-to-injection to track your chromatograph and consistency, to track whether or not you have trends or issues with your sample, especially in the absence of analyte. So this is really important for things like drugs of abuse testing, where you're not going to have drugs-- not every patient sample you're testing has all of the drugs in your panel all the time. So it's really important to have internal standards kind of scattered throughout your chromatograms, so you can actually make sure that your solvents are mixing while your sample's being injected properly. Those kinds of things. You really do have to make specific note of your calibrators and QC material.
And I'll show you an example of why that is in a moment. But matrix effects can really alter your extraction efficiency or your ion suppression profile. So if you're running your calibrators and they're not representative of your patients, then you really have to question what you're actually doing with your sample prep. You can see variation in the internal standard due to ion suppression or chromatographic problems.
Generally, it's important to make note of this during your validation. Again, and I'll talk about this in a moment: if your target analyte co-elutes with your internal standard, as it should, then you're going to see some suppression of your internal standard when your target molecule is high in abundance. So it's just very important to understand how your assay actually functions, because that would be a part of your linearity experiment. OK. So further on the internal standard: what I showed you before was an OK example. This is an example of what an internal standard should not look like.
So trends are highly undesirable, and this is something that I've encountered several times in my current lab, where we have just some minor ongoing issues that need to be cleaned up. So if you see something, like you can see here in the first 10 injections or so, these are the calibrators and QC material. And then from injection number 10 on, and this is in the left panel, you can see that it's highly variable. The internal standard response is all over the place, but there's also kind of a trend start to finish.
It starts high and ends low. And when I pulled the data, I found that this was actually reproducible run-to-run, day-to-day, batch-to-batch. And everybody had just accepted this as normal. But it's basically bad practice, because your calibrators are running differently than your patient samples, obviously. And there's also something obviously different with your patient samples. There's something in your patient samples that's causing differences in your chromatography over time.
And then when you see this abnormality on the right panel, you should have seen it coming. People were shocked that this happened. No, no.
You needed to see this coming from your day-to-day monitoring of your internal standard response. You know that it's dropping over time, and then all of a sudden one day you get precipitation on the column, or your guard column gets plugged or whatever, and that causes this problem. But the main thing is, if this is what your internal standard pattern looks like, you really do have to address it, through sample preparation, different chromatography, something along those lines; something is wrong, and it's important to address it before you have as big a problem as you see on the right. So the question of, can QC help? Of course. Yeah.
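The start-high, end-low pattern described above can be caught numerically before it becomes a plugged guard column. A minimal sketch, assuming a least-squares slope over the run's internal standard areas and a hypothetical 20% start-to-end drift limit:

```python
def trend_slope(areas):
    """Ordinary least-squares slope of IS peak area vs injection index.
    A consistently negative slope, reproducible batch-to-batch, is the
    'starts high, ends low' pattern described above."""
    n = len(areas)
    mean_x = (n - 1) / 2
    mean_y = sum(areas) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(areas))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def trending(areas, max_drift=0.20):
    """Flag a run whose fitted start-to-end change exceeds a fractional
    limit (the 20% default is hypothetical; set yours during validation)."""
    slope = trend_slope(areas)
    fitted_change = slope * (len(areas) - 1)  # change over the whole run
    mean_area = sum(areas) / len(areas)
    return abs(fitted_change) / mean_area > max_drift
```

A batch that trips `trending` run after run is exactly the "everybody just accepted this as normal" situation: the fix belongs in sample preparation or chromatography, not in waiting for the failure.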
QC can help. But how often to run it really depends on lots of different factors. How many repeats your lab can tolerate if something goes wrong is a big thing.
So if you're running a 10-minute drug screen method, then obviously you want to run your QC a little more often, because you don't want to have to repeat dozens of patient samples at 10 minutes a run. If it's a 2-minute run, maybe you can run your QC a little more flexibly. The biggest question is, do the quantitative results actually reflect the underlying problem, the problem you're actually trying to figure out? If you're tracking just the quality control quantitative results, then it's important to keep in the back of your mind that you may not actually be looking at what the real problem is.
So when you're troubleshooting your QC failures, that's something that you definitely need to pay attention to. The arbitrary recommendation is that you run QC every 10 patient specimens. I think this comes from basically GLP practices. And there's no real reason for it other than it's a good number. It's a nice round number, and it seems like it works pretty well.
But again, if you have a very high-throughput method, where every two minutes you're injecting a new patient sample, maybe you don't need to run QC every 10 patient specimens; maybe just at the beginning, middle, and end. And again, does your QC actually reflect patient specimens? If you have a lab-prepared QC that maybe has a little more organic solvent in there, because that's how you had to make it, and you get a very different recovery from your QC than you do from your patient samples, you have to ask yourself that question. If your QC is not reflecting your patient samples, does running it periodically really change anything? It's also really common to run a calibration curve at the beginning and end of an LC-MS batch.
And this will let you know if there are big differences in chromatographic performance or LC-MS performance between runs and/or within the run. And that's really important, I think. But the question of which calibration curve to use when they're different, that's one of those things you have to address during your clinical validation and come up with acceptability criteria for. And in that case, you would have to rely on the QC that you've run throughout your batch. So onto ion ratios.
So ion ratios are the classic triple quad quality metric, as Jessica mentioned. This is like your fingerprints, right? So if I have the right retention time, the right ion ratio, the right peak shape, all is well. I can say with high confidence that this is indeed benzoylecgonine, for example.
But the acceptability criteria generally use peak area, and peak area can be influenced by a number of different factors. So if you have an overlapping peak, some interference that's overlapping with your analyte of interest, and you see your qualifier peak is way higher than your quantifier when it shouldn't be, then obviously that's an interference. That's a problem. But whether that's suppression of your quantifier and an increase of your qualifier, or vice versa, is a question you have to address. And that's kind of its own problem, because I don't think that modifying your chromatography would actually get rid of this. You may have to address it during sample prep.
What's more common is that you get these shoulders on your qualifier. And the dangerous thing is, if you just rely on the ion ratio itself, the peak area ratio, then you may actually miss this, because those areas may turn out to be coincidentally the same as your analyte of interest. But clearly there's something overlapping there, and that's going to be a problem. So really, use of ion ratios requires an understanding of your expected peak shape for both the quantifier and the qualifier. It's possible to still report the result if only the qualifier is affected by the interference, but you really do have to do a bit more checking into that, and that may require repeat testing, new extractions, checking your solvents, et cetera.
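The ion ratio acceptance check itself reduces to a small window test; a sketch is below. The ±20% window is a commonly used starting point, not a universal rule, and as noted above a passing ratio can still hide a shouldering interference, so peak shape review stays essential.

```python
def ion_ratio_ok(quant_area, qual_area, expected_ratio, tolerance=0.20):
    """Classic triple-quad identity check: the qualifier/quantifier peak
    area ratio must fall within a window around the ratio established
    from calibrators. The +/-20% default is a hypothetical starting
    point; set yours during validation.
    """
    if quant_area <= 0:
        return False  # no quantifier peak: cannot confirm identity
    observed = qual_area / quant_area
    low = expected_ratio * (1 - tolerance)
    high = expected_ratio * (1 + tolerance)
    return low <= observed <= high
```

A second or third qualifier transition, as suggested later in the talk, would simply be another call to the same check with its own expected ratio.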
If failure is common, you may want to look at adjusting your sample preparation or chromatography, using a new column, having a confirmation method, those kinds of things. But if all else fails, you can actually add a second or third qualifier ion that can be used when the primary one fails. And again, you just have to run through and make sure these kinds of interferences don't overlap with those secondary and tertiary qualifier ions. So the question about external quality assurance, or proficiency testing: obviously this is just the same as any other chemistry assay. It prevents drift over time due to unidentified changes in your assay performance. You just make sure you're not wandering too far from your peer group.
Pretty much all the usual PT providers have LC-MS options. CAP, LGC, and Oneworld are a few of the providers that I've used. But you'll notice, if you've been running these things, that the acceptability criteria are often very, what I'm going to call, friendly. The criteria are not very stringent. And that's something to keep in the back of your mind. Really, if you are doing your own internal quality checks sufficiently, if you're checking your lot-to-lot variation in your calibrators and your QC, you should never really fail.
But the number one failure we do have is matching units. Because we are in Canada and lots of those proficiency testing labs are in the US, we have to convert our units, and we always miss a decimal or don't carry a 1 or whatever. And so this is where it's really important to set your acceptability criteria around preparation of calibrators, or even bringing in external, commercially available calibrators and QC material.
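Since the number one PT failure mentioned here is unit conversion, it's worth keeping the arithmetic in one tested function rather than doing it by hand. A sketch of the common mass-to-molar conversion; the morphine molar mass below (~285.34 g/mol) is used purely as an illustration and is not from the talk.

```python
def ng_per_ml_to_nmol_per_l(conc_ng_per_ml, molar_mass_g_per_mol):
    """Convert ng/mL (common US reporting) to nmol/L (common SI reporting).

    ng/mL is numerically identical to ug/L; dividing ug/L by the molar
    mass in g/mol yields umol/L, and multiplying by 1000 gives nmol/L.
    Centralizing the factor avoids the missed-decimal errors described
    above.
    """
    return conc_ng_per_ml * 1000.0 / molar_mass_g_per_mol

# Illustration: 10 ng/mL of morphine (molar mass ~285.34 g/mol)
morphine_nmol_per_l = ng_per_ml_to_nmol_per_l(10.0, 285.34)
```

The same pattern, one explicit function per reported unit pair, makes the conversion auditable when a PT result comes back flagged.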
It's a good idea to get them as separate lots or from different vendors. Whether you're making them yourself or not isn't really important. What's important is to get as much variability in there as you can, so that you aren't reliant on a single vendor wherever possible and can catch any changes in your chromatographic or LC-MS performance over time.
So just a note on validation. I don't want to go too deep into this, because again, the focus here isn't validation. There are CLSI guidelines available for clinical method validation, and they cover pretty much everything you could want. For LC-MS clinical method validation, and I'm not talking about the development side with picking your ions and so on, I'm talking about the clinical side, you really just need to shift your focus a little. You do the exact same experiments as you do with any other clinical method evaluation, whether it's ammonia or AST, but you have to focus differently.
And so this is going to be shifting focus onto the peak areas, the reproducibility thereof. So when you do your linearity experiments, you want to make sure that your internal standard peak area is not being suppressed too much. You can purposely adjust your internal standard peak area to compensate for that and actually shift your linear range if you need to. You have to focus on matrix effects.
Obviously with manual sample prep there are a lot of issues surrounding that, and ion suppression experiments using post-column infusion are a good way to go about it. Then precision experiments. It's very important to focus on both your peak areas and your quantitative results. If you have an internal standard and an analyte that behave differently during your extraction, then your precision experiment needs to look at the difference in your peak areas. So when you see a high percent CV in the analytical result, that may actually be because your internal standard ratio is completely off, because your internal standard is inappropriate or is being suppressed differently, something along those lines.
So that's where you need to focus with your precision experiments. Limit of detection is fairly standard, so there's no real difference there. But it is really important to keep in the back of your mind that you don't want your zero, your background noise, to exceed 20% of the peak area of your low sample.
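The 20% background rule mentioned here reduces to a one-line check. A sketch, assuming blank and low-sample peak areas have already been extracted from the chromatograms; the 20% limit is the speaker's general rule, not a hard standard.

```python
def blank_noise_acceptable(blank_area, low_sample_area, limit=0.20):
    """True if background signal in the zero/blank injection stays at or
    below `limit` (20% by default, per the general rule in the talk)
    of the lowest sample's peak area."""
    return blank_area <= limit * low_sample_area
```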
And that's just a general rule. For interferences, there's less focus on serum indices and more focus on isobaric compounds, drugs, ion suppression, that kind of stuff. And that's pretty much the gist of my talk. So I hope I've convinced you that quality metric monitoring is vital to understanding the performance of your assay.
But when I came into my lab, there was a lot of aversion to using the technology, basically because people weren't familiar with it. There are a lot of handwaving arguments that come up: oh, that internal standard just dropped off, so I need to gut the instrument and change all of the lines. Well, no, you just needed to alter your sample preparation. So it's really important to understand how your assay is performing before you go live with it clinically, and to have all your staff understand that as well, so that you don't end up with these biases against the technology, or these, what I would call, extreme troubleshooting measures for simple problems.
So with adequate sample preparation and foresight, LC-MS methods should be able to be incorporated into pretty much any workflow. I've worked in a hospital lab, and I work in a reference lab now, and LC-MS was applicable to both. Different tests, obviously, but it is doable. You just really do have to reinforce the differences between those high-volume chemistry analyzers and the LC-MS system.
And LC-MS expansion is happening, so understanding what these metrics are and how to use them clinically is more important than ever, I would say. So I guess there's really just one acknowledgment here: Heather Paul, who helped construct some of these slides.
But I really did want to go out of my way to say thank you to the Waters Corporation and LabRoots for having us. I'm excited, and hopefully this was informative, and I guess I will open it up to questions. Thank you very much, Dr. Sadrzadeh, Dr. Orton, and Dr. Boyd, for your informative presentation. We're now going to start the live Q&A portion of this webinar. If you have any questions you'd like to ask, please do so now. Just click the Ask a Question box located on the far left of your screen. We already have several questions that have come in, and we will answer as many of them as we have time for. Before we do, we have a couple of poll questions that are going to be sent out to you now.
And while that's happening, I'm going to go ahead and get started with our first questions. So I think I'm going to start with a question that came in about the identification of new synthetic illegal drugs. This is a topic in toxicology that comes up quite often. So maybe, Dr. Boyd, you can give us your experience on synthetic illegal drugs and measurement of those? Sure.
Yeah. So of course these are a very tricky group, because they're changing so quickly that everybody really struggles to keep up with them. For some of these groups, we see 10 or 12 new variants show up every year, sometimes every few months. And so it's really quite tricky to keep up, because by the time we validate a method clinically, and usually these are lab-developed tests, as Dennis was talking about, it takes time for the lab to catch up.
And by the time we have, usually they've moved on to a new variant of that drug, making us go back to the beginning and start everything again. So it's definitely tricky. Typically, the strategy used the most is high-resolution mass spectrometry, which we didn't spend a ton of time on today: essentially gathering as much information about that sample as possible in an untargeted fashion. And I know, for example, here in Alberta, our medical examiner will do that using a ToF instrument, take those scans, and then save them, so that later on, when a new drug is identified or we hear about it on a Listserv or something, they can go back and search those old spectra and look to see if those drugs were actually there.
We just didn't know that they were there at the time or we hadn't identified them as a new synthetic drug. So it's definitely-- it's definitely a huge problem, probably something, like I said, that high res mass spec is more able to deal with than perhaps the MRM triple quad information that we're using for some of our other assays. But it's going to be a continuous problem, like I said, because there's just more and more of these coming out all the time, and it's hard for us to keep up.
Yes. I can imagine. All right.
Thank you for that. The next question that I have here is regarding how fixation time and sample age affect mass spec-based recognition of peptides in cancer histology applications. Dr. Orton? Hi.
Yeah. So I should preface this by saying that I've never actually done this clinically. But I know that when you're doing things like proteomic analysis, or genomic analysis where you're trying to extract RNA or DNA from fixed tissue, the fixation time does influence how well you can actually get analytes out. But you can probably work that out during the antigen retrieval-type method that you're using, whatever method you're using to wash off the paraffin or remove the formalin. You can probably adapt your procedure that way to make it more robust and less influenced by the fixation process.
Great. Thank you. And while you're on the phone, I still-- I have another question for you.
The question that came in is that someone gets interference for steroids when patients are on supplements, despite sample preparation. Is there any idea of how to deal with that? Steroids are tricky. I think what some people say to do is to have multiple chromatographic methods, or different columns. If you can redevelop or co-develop the method using a separate column chemistry, then that would at least theoretically separate out the interferences differently than your routine method. Yeah.
Thank you. Sorry, we've got a lot of questions coming in, so I'm trying to get through them all. The next question I have: is the association of 2D gels with MALDI-ToF MS a technique in disuse? I guess I could probably take that one too. So I come from a proteomics background.
I did discovery-based shotgun proteomics for my PhD project. I used 2D gels, with isoelectric focusing along the top and then an SDS-PAGE-based separation along the other axis. I think the movement is toward more of a MudPIT-type technology, which basically allows you to label your sample using tandem mass tags, and you can do quantitative analysis that way. The gel-based analysis used differential staining, so you would use a green tag on one sample and a red tag on another, and then you could see the differences in the gel itself. And then you could excise those spots, do a MALDI-ToF analysis, and get an idea of what was different.
But I would say that technology is indeed in disuse, and that quantitative proteomics is shifting more toward MudPIT and peptide-based methods. Thank you. Another question: how do you deal with ion suppression from the most abundant proteins and phospholipids in serum for targeted analysis? Do you need to perform immunocapture or use spin columns? Lots of proteomics-based questions here. I would say yes, probably. The most difficult proteomic method in the clinical lab that I know of, maybe not the most common, is thyroglobulin, which is an enormous protein, and you have to digest it.
And actually you use what's called a SISCAPA approach. That's Stable Isotope enriched-- I can't remember the rest of the acronym at the moment. But basically you use an antibody that's targeted against a peptide after you do your tryptic digestion.
So you're not enriching the protein, you're enriching the peptide. And I think that provides you a lot more specificity in a clinical method development. In the more discovery-based proteomics, yeah, spin columns are available.
You can deplete high-abundance proteins. You can use molecular weight cutoff filters, differential precipitation, those kinds of things, or get rid of all the immunoglobulins. There are various strategies to deal with that. But SISCAPA is probably the most common in the clinical lab, because you get the most specificity from it.
Great. Thank you. I guess we'll continue for a couple more questions. I know it's getting late.
But what type of instrument-to-instrument variability would you expect to see when monitoring internal standard areas? I guess that's me again, unless Jessica, do you want to answer that one? Oh, no. Go ahead. It's your part of the talk. OK. I don't know if there's a solid answer to this because there's always subtle variation between different instruments. So we do have paired analyzers in our lab, but we don't quantify the difference between the internal standard responses.
I think it's important to keep in mind that when you're validating your analyzer, you have to consider the specific performance of that one analyzer. I would never try to take the peak area from one and compare it to another. That said, they should be similar; you should be within the same ballpark. I'm not going to say within 5% or 10%, but they should perform much the same way between the two analyzers when you're looking at your analyte response ratios.
So it's kind of hard to answer that question specifically, but I would say they need to be similar, but not the same. Right. Great.
Thank you. So Dr. Boyd, why is multiple reaction monitoring used so prevalently in clinical chemistry assays? Yeah.
I think part of the reason it's used so prevalently for chemistry is that it's a really strong technique for what we would call targeted analysis, and I alluded to this in the last question. You can think of running your mass spec in a targeted way, where you know what you're looking for and can tell the instrument to look just for those things, or in an untargeted way, where you're trying to collect as much information across the whole sample as possible. There are advantages either way.
In the case of those emerging drugs of abuse, it's potentially more useful to collect all of that information, to try to make sure we don't miss anything. But for many of the analytes we're doing in chemistry, we know what analyte we want to look at, and therefore we know the molecular weight and should be able to figure out its fragmentation pattern during method validation. And so by running an MRM, we're able to really increase the specificity, because we're looking at that fragment transition, and also reduce some of the background noise, because we tell the instrument to look only at certain things. And so for things like our metanephrines assay, we know we're looking for metanephrine and normetanephrine.
We don't necessarily care about all the other stuff in the sample. So I think that's one of the strengths. The other thing is, of course, once you have one method running in MRM, it's perhaps not as much of a leap for staff to understand what you're doing if you run another MRM, even if it's for a very different analyte class, such as a TDM assay or something like that.
And then third, of course, you can run that kind of scan mode very easily and efficiently on something like a triple quad, which puts the instrumentation at a price point where many labs are now able to purchase it, rather than going for something that's maybe overpowered for the analysis you want to do. Perfect. Thank you. I think we'll take one last question, because we are quite over time, and the questions that we didn't get to, we will answer.
I promise you. We will do that by email, using the email address that you sent to us. But the last question is going to be more of an educational question for Dr. Sadrzadeh. If the four scientists developed the methods for ionization of large molecules at the same time, why did only two of them receive the Nobel Prize? Thank you, Ben. This is a question that has actually been on the minds of many scientists in the field.
Indeed, many of the recognized chemists, especially in Europe, who were invited to the Nobel ceremony did not go. And the reason they didn't go, they said, was that the Nobel Prize went to the wrong address. They meant it should have gone to Dr. Hillenkamp and his colleague, Dr. Karas, in Germany.
But I'm going to quote Dr. Norden, who was the Chairman of the Nobel Prize Committee for Chemistry. He said that Tanaka was honored because he was the first to develop an idea that changed other people's way of thinking.
There are other people who also said that Tanaka did not only invent a new way of ionization; he also invented the way to really set up the instrument, adjusting the detector. It wasn't just the ionization. But in general, I should say that many people were upset, and MALDI, the method that was developed by Dr. Hillenkamp and Dr. Karas, has been used far more than the method that was developed by Dr. Tanaka. Thank you.
Thank you very much, Dr. Sadrzadeh. All right. I would like to thank the audience for joining us today. Again, we will get to the rest of the questions, but we are running a little over time. Thank you for your attendance, your interesting questions, and your engagement. I'd also like to thank our presenters for their time today and for sharing this important work.
We'd also like to thank LabRoots for their partnership with us here at Waters Corporation and for underwriting today's educational webcast. This webcast can be viewed on demand. LabRoots will alert you via email when it's available for replay, and we encourage you to share that email with your colleagues and anybody who missed today's live event.
Thank you again to our presenters, and thank you to you for joining us today.