
© 2024 MJH Life Sciences™, Cannabis Science and Technology. All rights reserved.

*The third part in this series of pitfalls to avoid and practicalities to know about discusses the importance of validating and checking your calibration to ensure quality.*

This column continues the discussion of the practicalities and pitfalls of quantitative spectroscopy started in the last two columns (1,2). The reason that a discussion of quantitative spectroscopy matters for cannabis analysis is that many cannabis potency methods use high performance liquid chromatography (HPLC) along with ultraviolet-visible (UV-vis) spectroscopic detection (3). Also, there exist cannabis potency analyzers based on mid-infrared spectroscopy (4).

The fundamental equation of quantitative spectroscopy is Beer’s Law (5), whose form is seen in **Equation 1**:

A = εLC **[Equation 1]**

where A is the absorbance, the amount of light absorbed by a sample; ε is the absorptivity, a fundamental physical constant of a molecule; L is the pathlength, or sample thickness; and C is the concentration.
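As a quick illustration of Beer’s Law, the absorbance can be computed directly from the three quantities just defined; the numerical values below are hypothetical, chosen only to show the arithmetic:

```python
# Beer's Law (Equation 1): A = epsilon * L * C
# All values below are hypothetical, for illustration only.
epsilon = 1.2  # absorptivity, L/(g*cm)
L = 0.5        # pathlength, cm
C = 2.0        # concentration, g/L

A = epsilon * L * C  # absorbance (unitless)
print(A)  # 1.2
```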

I will dispense with any further discussion of Beer’s Law or the basics of quantitative spectroscopy and refer you to my previous columns and book on these topics (1,2,5-8). In this, the third in a series of pitfalls to avoid and practicalities to know about, we will discuss the importance of validating and checking your calibration to ensure quality.

Our tendency, once we have a calibration in hand, is to begin using it immediately; calibrations are a lot of work, and we may be anxious to start getting results. This is, however, the wrong approach. Recall (5-8) that when we generate a spectroscopic calibration, we have to make up samples of known concentration called *standards*, measure the absorbance of these standards, and plot their absorbance versus concentration to obtain a calibration line. We can then use our calibration to predict concentrations in unknown samples using **Equation 2**:

C_{unk} = A_{unk}/εL **[Equation 2]**

where C_{unk} is the concentration of analyte in the unknown sample, A_{unk} is the absorbance of the analyte in the unknown sample, and εL is the product of the absorptivity and pathlength obtained from the slope of the calibration line.

Note in Equation 2 that the concentration and absorbance refer to the unknown sample, but that εL is derived from a plot based on the standard samples. The assumption we are making here is that εL is the same for the standards as it is for the unknown samples. This is what I call the *fundamental assumption of quantitative spectroscopy* (FAQS). If it is violated, εL differs between the standards and the unknowns, and applying the standards’ value of εL in Equation 2 will give incorrect values of C_{unk}. Therefore, violating the fundamental assumption of quantitative spectroscopy can be very damaging. So then, how do we know if we are violating this assumption? This is where calibration validation comes in.
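The workflow described above can be sketched in a few lines of code. The standard concentrations and absorbances below are hypothetical; the slope εL comes from a least-squares fit of absorbance versus concentration through the origin, and Equation 2 then inverts the fit for an unknown:

```python
# Hypothetical standards: known concentrations (Wt. %) and measured absorbances
C_std = [1.0, 2.0, 3.0, 4.0]
A_std = [0.21, 0.39, 0.61, 0.80]

# Least-squares fit of A = (epsilon*L) * C through the origin:
# slope = sum(A_i * C_i) / sum(C_i^2)
epsilon_L = sum(a * c for a, c in zip(A_std, C_std)) / sum(c * c for c in C_std)

# Equation 2: invert the calibration line for an unknown sample
A_unk = 0.50
C_unk = A_unk / epsilon_L
print(round(C_unk, 2))  # 2.49
```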

To check if we are violating the FAQS, we must test whether εL is the same for the standard and unknown samples. Why would εL ever be different for the standards and unknowns? The pathlength, L, can be different if the sampling cells or devices you are using for the standards and unknowns are not identical. The best way to ensure this is to use the same sample cell for all standards and unknowns. Ensuring the absorptivity, ε, is the same for standards and unknowns is trickier. Recall that the absorptivity is matrix sensitive and changes with variables such as temperature, pressure, concentration, and composition (5,6). This is why it is so important to ensure that all the experimental variables are the same when you measure the absorbance of the standard and unknown samples.

Given the vicissitudes of experiments and our inability to always perfectly control all variables, we must validate a calibration before using it to analyze unknown samples. To test the FAQS, we make up a standard sample of known concentration but do not use it in the calibration. We will call this sample a *validation sample*. We treat the validation sample as an unknown, measure its absorbance, apply Equation 2, and predict its concentration. We then compare the predicted concentration of the validation sample to its known concentration to see how well they agree. If they agree within the accuracy you are looking for, you can go forward and implement your calibration. If not, it is back to the drawing board; more on that in future columns.
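A minimal sketch of such a validation check follows. The slope εL, the validation sample’s absorbance and known concentration, and the required accuracy are all hypothetical numbers chosen for illustration:

```python
# Hypothetical validation check against a required accuracy
epsilon_L = 0.20         # slope of the calibration line (hypothetical)
A_val = 0.62             # measured absorbance of the validation sample
C_known = 3.00           # known concentration of the validation sample, Wt. %
required_accuracy = 0.5  # application-specific accuracy requirement, Wt. %

C_pred = A_val / epsilon_L     # Equation 2: predicted concentration
error = abs(C_pred - C_known)  # difference between predicted and known values
print(round(C_pred, 2), error <= required_accuracy)  # 3.1 True
```

Here the prediction agrees with the known value within the required accuracy, so this calibration could go forward.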

Recall that accuracy is a measure of how far away a measurement is from its true value (9). The difference between the known and predicted concentration for the validation sample is the measure of the accuracy of your calibration. It is important to use at least one validation sample to test that your calibration does not violate the FAQS. It is even better to create and analyze several validation samples. In this case, for each sample, as above, measure its absorbance, apply the calibration, and predict its concentration. The beauty of analyzing multiple validation samples is that it allows you to calculate what is, in my opinion, the best measure of calibration accuracy, the standard error of prediction (SEP), which is seen in **Equation 3**:

SEP = [∑_{i}(C_{p} - C_{a})²/(n - 1)]^{1/2} **[Equation 3]**

where SEP is the standard error of prediction, i is the validation sample index, C_{p} is the predicted concentration, C_{a} is the actual concentration, and n is the total number of validation samples.

Let’s say there are three validation samples. To calculate the SEP, the first thing to do is subtract the actual concentration from the predicted concentration for each of the three samples, square each of the differences, and then add these together. This gives the numerator in Equation 3. You then divide this sum by the number of validation samples minus 1; this is the denominator in Equation 3. Then, take the square root, and you have yourself a standard error of prediction. The SEP is the best measure of calibration quality because it shows you how well your calibration does on samples that are not included in the calibration, which is exactly how a calibration is used in real life. The units of the SEP will be the units of your concentration measurement. For example, if C_{p} and C_{a} are in weight percent (Wt. %), then the SEP will be in Wt. %.
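The three-sample walkthrough above can be sketched as follows; the predicted and actual concentrations are hypothetical values for illustration:

```python
import math

# Hypothetical predicted (C_p) and actual (C_a) concentrations, Wt. %,
# for three validation samples
C_p = [19.6, 20.5, 21.1]
C_a = [20.0, 20.2, 21.0]

# Equation 3: SEP = sqrt( sum_i (C_p_i - C_a_i)^2 / (n - 1) )
n = len(C_p)
sum_sq = sum((p - a) ** 2 for p, a in zip(C_p, C_a))  # numerator
SEP = math.sqrt(sum_sq / (n - 1))                     # divide by n-1, take root
print(round(SEP, 3))  # 0.361, in Wt. %
```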

What value of the SEP tells me if my calibration does or doesn’t violate the FAQS? Strangely enough, after all this math, the ultimate answer is a judgment call. It depends on how accurate you need the calibration to be; that is, the needed accuracy is application specific. For example, if you want to know the total tetrahydrocannabinol (THC) in a cannabis bud that is normally around 20 Wt. %, an accuracy of ±1 Wt. % might be fine. However, if you are a hemp grower and have to ensure your crop is less than 0.3 Wt. % total THC to comply with federal law, an accuracy of ±1 Wt. % total THC is worthless because the error bar is bigger than 0.3 Wt. % and overlaps with zero. In this case, an acceptable total THC accuracy would need to be significantly less than 0.3 Wt. %. For example, mid-infrared spectroscopy is capable of an accuracy of ±0.04 Wt. % for total THC in dried, ground hemp (10).

A mistake I find many people make is demanding more accuracy out of a calibration than is necessary. For example, for cannabis growers in states such as California where cannabis is legal, an accuracy of ±0.04 Wt. % for total THC is not needed since the state allows a ±10% relative error between your label and the actual product (11). For example, a bud labeled at 20% total THC can contain anywhere from 18% to 22% total THC and still be within the law. Because increasing accuracy is always time consuming and expensive, determine before you start calibrating what accuracy your calibration requires, and then stop work once you achieve that accuracy level.

Now that we have a validated calibration, we can begin using it to legitimately predict concentrations in unknown samples. Does this mean our job is done? By no means. Remember that because of the FAQS we must always ensure that εL is the same for the standards and the unknowns. Things change over time, and the only way to make sure you are not violating the FAQS over time is to run frequent calibration checks. A calibration check is not a full calibration and is very much like a validation. To perform a *calibration check*, make up some validation samples, measure their absorbance, predict their concentration, and compare the predicted and known values like we did above. Calculate an SEP if you used more than one calibration check sample. If things still agree within your required accuracy you can continue to use your calibration. If not, you need to immediately stop using the calibration and investigate what went wrong and how to fix it (more on this in later columns).

How often should you run a calibration check? The answer is: as often as possible. Some laboratories run a calibration check every time they use a quantitative spectroscopic calibration. This is great, but it can be time consuming and expensive. If you use your calibration every day, a calibration check should be performed at least weekly. If you use your calibration less often, you really should think about running a calibration check every time you use it.

The fundamental assumption of quantitative spectroscopy (FAQS) was introduced, wherein we assume that the product of the absorptivity and pathlength, εL, is the same for standard and unknown samples. We test this assumption by running a calibration validation, in which a standard sample of known concentration, a validation sample, is analyzed with the calibration and the known and predicted values are compared. The difference between these two is the accuracy of your calibration. Ideally, several validation samples should be run so that a standard error of prediction, the best measure of calibration accuracy, can be calculated. Whether or not your calibration is accurate enough to be used depends upon your application. To ensure the FAQS is not violated over time, calibration checks should be run as often as is practicable.

**References**

1. B.C. Smith, *Cannabis Science and Technology* **5**(5), 8-13 (2022).
2. B.C. Smith, *Cannabis Science and Technology* **5**(6), 8-11 (2022).
3. M.W. Giese, M.A. Lewis, L. Giese, and K.M. Smith, *Journal of AOAC International* **98**(6), 1503 (2015).
4. https://bigsurscientific.com/.
5. B.C. Smith, *Quantitative Spectroscopy: Theory and Practice* (Elsevier, Boston, Massachusetts, 2002).
6. B.C. Smith, *Cannabis Science and Technology* **5**(4), 8-15 (2022).
7. B.C. Smith, *Cannabis Science and Technology* **5**(3), 10-14 (2022).
8. B.C. Smith, *Cannabis Science and Technology* **5**(2), 10-13 (2022).
9. B.C. Smith, *Cannabis Science and Technology* **1**(4), 12-16 (2018).
10. B.C. Smith, *Cannabis Science and Technology* **3**(6), 10-13 (2020).
11. https://cannabis.ca.gov/wp-content/uploads/sites/2/2021/10/DCC-Cannabis-Regulations-Sept.-2021.pdf.

**Brian C. Smith**, PhD, is Founder, CEO, and Chief Technical Officer of Big Sur Scientific. He is the inventor of the BSS series of patented mid-infrared based cannabis analyzers. Dr. Smith has done pioneering research and published numerous peer-reviewed papers on the application of mid-infrared spectroscopy to cannabis analysis, and sits on the editorial board of *Cannabis Science and Technology®*. He has worked as a laboratory director for a cannabis extractor, as an analytical chemist for Waters Associates and PerkinElmer, and as an analytical instrument salesperson. He has more than 30 years of experience in chemical analysis and has written three books on the subject. Dr. Smith earned his PhD in physical chemistry from Dartmouth College. Direct correspondence to: brian@bigsurscientific.com.

B. Smith, *Cannabis Science and Technology*® Vol. **5**(7), 8-11 (2022).
