Webinar Q&A follow-up: Practical considerations in the design of quantitative targeted methods in proteomics


Thank you to everyone who attended the live webinar, ‘Practical considerations in the design of quantitative targeted methods in proteomics’. Below are our responses to the questions posed during the live event that we did not have time to answer. We hope this is a useful resource, and we thank our attendees and our speaker, Sandra E. Spencer (University of Washington, USA), for their time.

Some lingering problems in BUS proteomics: there is no guarantee one is seeing all possible peptides, and using that collection may mislead final identification of the actual protein from which it came. TDS, MOS and MDS are other possible approaches to doing proteomics besides BUS. Proteomics papers are not shown to be repeatable or reproducible, with work only being carried out once; is that still true today? Also, there is no method validation reported for proteomics methods in the literature; why is that still true?

It is correct that only some peptides are measured for each protein. We use a peptide as a proxy for the protein, just as we use an antibody against a domain as a proxy for a protein in an immunoassay. We do put ‘uniqueness’ constraints on the peptides we use for quantitative work. For example, if I want to measure a specific protein (rather than a family of proteins), I make sure that the peptide is unique in the background I'm working in (usually human) and in any other species I'm using as a background (e.g. chicken). There are now many fully validated targeted protein assays utilized in the clinical laboratory. One of the classic examples is thyroglobulin, which has been shown to be more reproducible between labs and overcomes the problems of the associated immunoassays. If you look at the selected resources page, there is the assay portal that houses many proteomics assays validated as we discussed in the webinar, and more detail on how this is done can be found in the references cited on the ‘selected resources’ slide.
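As an illustration of what such a uniqueness check can look like in practice, here is a minimal Python sketch that digests one or more background proteomes in silico and counts how many proteins could produce a candidate tryptic peptide. The FASTA file names and the candidate peptide are placeholders, and real workflows typically rely on dedicated tools rather than a script like this.

```python
# Minimal sketch: check whether a candidate tryptic peptide is unique within
# one or more background proteomes (e.g. human plus chicken).
# FASTA file names below are placeholders for whatever databases you use.
import re

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            else:
                seq.append(line)
        if header is not None:
            yield header, "".join(seq)

def tryptic_peptides(sequence):
    """In-silico tryptic digest: cleave after K or R, but not before P."""
    return re.split(r"(?<=[KR])(?!P)", sequence)

def proteins_containing(peptide, fasta_paths):
    """Return headers of every background protein that yields the peptide."""
    hits = []
    for path in fasta_paths:
        for header, sequence in read_fasta(path):
            if peptide in tryptic_peptides(sequence):
                hits.append(header)
    return hits

if __name__ == "__main__":
    backgrounds = ["human_proteome.fasta", "chicken_proteome.fasta"]  # placeholders
    candidate = "ELVISLIVESK"  # hypothetical candidate peptide
    hits = proteins_containing(candidate, backgrounds)
    print(f"{candidate}: found in {len(hits)} protein(s)")
    # A quantifier peptide should map to exactly one protein (or one family,
    # if a family is what you intend to measure).
```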

Where we have targeted proteins, we have found increased peptide recoveries with optimized conditions, including multiple additions of protease. Do you aim for 100% proteolysis, and how do you measure and achieve this?

This is definitely a way to increase the amount of peptide released from the protein. We are not aiming for 100% proteolysis; we don't think this is really going to happen. Instead, we consider ‘complete’ proteolysis using our sample as a reference. We measure this with a digestion time course, where we look for the target peptide to be completely released from the protein under the conditions we expect to use.
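To make the idea of a digestion time course concrete, here is a minimal Python sketch (not the speaker's actual workflow) that fits illustrative time-course data to a first-order release model and checks whether release has plateaued by the planned digestion time. The data values, the model and the acceptance threshold are all assumptions for illustration.

```python
# Minimal sketch: fit a digestion time course to a first-order release model
# and check that peptide release has plateaued under your chosen conditions.
# All data values below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, plateau, k):
    """Peptide signal released by time t, assuming first-order kinetics."""
    return plateau * (1.0 - np.exp(-k * t))

# Digestion time in hours and light/heavy peak-area ratio (illustrative only)
time_h = np.array([0.5, 1, 2, 4, 8, 16, 24])
ratio = np.array([0.21, 0.38, 0.61, 0.83, 0.95, 0.99, 1.00])

(plateau, k), _ = curve_fit(first_order_release, time_h, ratio, p0=(1.0, 0.5))

# Treat release as "complete" once the fitted curve reaches, say, 95% of the
# plateau by your planned digestion time (the threshold is your own choice).
t_planned = 16.0
fraction_released = first_order_release(t_planned, plateau, k) / plateau
print(f"Fraction of plateau reached at {t_planned} h: {fraction_released:.3f}")
```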

Are there any drawbacks to using dimethyl sulfoxide (DMSO) in the mobile phase?

DMSO can give approximately a 2 to 3 fold improvement in signal, but it is expected to change the chromatography: for example, if it is in solvent A, all your peptides will elute earlier because you have added an organic solvent to your aqueous phase. There is also a concern about the safety of spraying this compound into lab air (e.g. when using electrospray ionization (ESI)), as it can carry other potentially toxic molecules through human skin. I'm not sure how probable this is after it has been vaporized, but it is still possible. A few percent of DMSO is also known to dirty your instrument faster: it coats the optics, you can get some charging issues, and you have to clean the instrument more often.

Do you have any advice on handling practicalities to minimize losses, such as choice of tips, microtubes or autosampler vials?

For applications where protein loss will cause a significant change in signal, you can use protein low-bind tubes and tips. Even with these, expect a significant amount of sample loss if you are working at really low concentrations. Another thing you can do here is keep your low-abundance sample in a higher-abundance protein background matrix. This can be your matrix of interest or even something like albumin, and it can help to reduce loss of your protein of interest.

We found filter-aided sample preparation helped overcome matrix proteolysis effects. Have you used this, and does it work quantitatively?

We have used filter-aided sample preparation (FASP) in the lab, and the utility of this method for targeted work is really sample dependent. FASP is good for situations where there is a lot of material we do not want in the sample, such as detergent, protease inhibitors and glycerol. If you really have to use it and are very careful, FASP can be used for quantitative work. However, there are some real drawbacks. Just as with SPE (e.g. the MCX example I showed in the webinar), we have seen different people get different results, and since you are doing your digestion on the filter, the concern about device-to-device differences is much greater than when you are simply cleaning up peptides. It is overall very low throughput, and we have not had much success getting it running on a robot in plates; we lost a lot of sample and got poor results. In general, we do not see a global improvement for our typical sample types (e.g. yeast, plasma, CSF, fresh frozen tissue), so unless we really have to use this more time-consuming and variable protocol, we don't. In addition, with targeted work you have the option of simply picking a different peptide if one does not work well with your more streamlined and reproducible approach (something with faster digestion kinetics, for example).

Do you have any experience with finding significant differences among similar samples when looking at the whole LC-MS peptide map data?

We have looked at the global proteome using discovery techniques, including data-independent acquisition (DIA) and data-dependent acquisition (DDA). Thank you for bringing this up; I didn't talk about it because it falls outside the purview of ‘targeted’ proteomics, but it is extremely important in terms of quantitative assays. We use essentially the same sort of validation steps that I discussed for targeted experiments, but on many more peptides: instead of tens you might be looking at thousands. This makes the data work-up much more difficult. To make this assay quantitative, you still need to establish linearity for each of the peptides you are interested in measuring. This is a really important point: you may get a statistically significant change in signal for a protein, but until you validate your assay you can't interpret it quantitatively; doubling the signal isn't necessarily doubling the amount of protein. The way to get at this is to work backwards, or inside-out: look at which peptides are potentially changing in your big mixture by doing your discovery experiment (DIA/DDA), then develop a targeted experiment for those you're interested in, buy your standards, and develop and validate your quantitative assay. There are definitely some limitations to doing quantitative work using DIA or DDA; nothing really beats targeted analysis on a triple quadrupole mass spectrometer right now.
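As a rough illustration of what the linearity check amounts to for a single peptide, here is a minimal Python sketch that fits a straight line to a spiked calibration series and reports the R². The concentrations, peak-area ratios and units are made up, and the acceptance criteria (slope, R², allowed range) would come from your own validation plan rather than from this example.

```python
# Minimal sketch: check response linearity for one peptide from a spiked
# calibration series (concentrations and peak-area ratios are illustrative).
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.5, 5.0, 10.0])            # fmol on column (illustrative)
area_ratio = np.array([0.01, 0.11, 0.22, 0.54, 1.08, 2.15])  # light/heavy peak-area ratio

# Ordinary least-squares line and coefficient of determination
slope, intercept = np.polyfit(conc, area_ratio, 1)
predicted = slope * conc + intercept
ss_res = np.sum((area_ratio - predicted) ** 2)
ss_tot = np.sum((area_ratio - np.mean(area_ratio)) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope={slope:.3f}, intercept={intercept:.3f}, R^2={r_squared:.4f}")
# Only within the range where the response stays linear can a doubling of
# signal be read as a doubling of peptide amount.
```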

Have you compared quantitation of any components using alternative strategies, such as stable isotope labeling by amino acids in cell culture (SILAC) vs. isobaric tags for relative and absolute quantitation (iTRAQ), absolute quantification (AQUA) or external calibration? Are they generally similar?

I haven't personally used any of these methods, but what I can say is that they aren't quite comparable because they're meant for different situations. SILAC and iTRAQ are best suited to looking at the global proteome rather than a targeted experiment. This is because you are taking the time, cost and effort to label all the proteins and then diluting your sample with the differently labeled systems before you analyze it; that is really more than we would want to do for a targeted experiment. That said, we would consider SILAC to be an internal standard technique for cell culture: just grow it up in nitrogen-15 (15N) and you have a background matrix that is nearly identical to what you are using. iTRAQ also isn't great because you can do a maximum of 10 samples. AQUA is essentially what we are doing here, adding a stable isotope labeled internal standard peptide, but it should be noted that this is not a calibrant; on its own, it can't tell us anything about the protein concentration. To trace a concentration back to a protein, you need to calibrate your assay with the reference protein (not a peptide) so that you can take into account some of those concerns you've been asking about: solubilization, protein loss to sample tubes, and ‘complete’ vs. ‘100%’ proteolysis.
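To show how such a protein-based calibration curve is used once it exists, here is a minimal Python sketch that back-calculates an unknown sample's concentration from its light/heavy peak-area ratio. The calibration points, the assumption of a linear response and the units are all illustrative, not taken from the webinar.

```python
# Minimal sketch: back-calculate a protein concentration from a light/heavy
# peak-area ratio using a calibration curve built with the reference protein
# (not a synthetic peptide), so losses and digestion efficiency are captured.
# All numbers are illustrative.
import numpy as np

# Calibration: reference protein spiked into the background matrix
calib_conc = np.array([1, 5, 10, 50, 100.0])             # ng/mL of reference protein
calib_ratio = np.array([0.02, 0.09, 0.19, 0.97, 1.92])   # light/heavy peak-area ratio

slope, intercept = np.polyfit(calib_conc, calib_ratio, 1)

# Unknown sample: measured light/heavy ratio for the same peptide
sample_ratio = 0.41
sample_conc = (sample_ratio - intercept) / slope
print(f"Estimated protein concentration: {sample_conc:.1f} ng/mL")
```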

Click here to watch the webinar on demand.