Source: Laboratory of Dr. B. Jill Venton - University of Virginia
The goal of many chemical analyses is quantitative analysis, in which the amount of a substance in a sample is determined. To accurately calculate the concentration of an unknown, careful sample preparation is key. Every time a sample is handled or transferred, some of it can be lost. There are, however, strategies for minimizing sample loss, as well as strategies for coping with sample loss while still making accurate measurements of concentration.
To minimize sample loss, the ideal is to minimize the number of sample handling and transfer steps. For example, weighing a solid sample directly into the flask in which the solution will be made eliminates a transfer step. If it is necessary to transfer from one flask to another, as when making a dilution, then triple-rinsing the glassware helps ensure all of the sample is transferred. Other strategies are more sample-specific. For example, samples that adsorb to glass, such as proteins, might be better handled in polypropylene disposable tubes. Because the tubes are not hydrophilic, if a small amount of sample is to be pipetted in water, it is best to add the water to the tube first, so the sample can be pipetted directly into the solvent. It may also be better to concentrate, rather than completely dry, a sample, because of losses from incomplete redissolution after rehydration.
Another source of sample loss is incomplete sample manipulation. For example, if a derivatization procedure is used and the derivatization is incomplete, then the full amount of sample is not observed. Errors such as this are systematic errors and can be solved by correcting the underlying problem, such as changing the derivatization procedure. Another cause of systematic error is matrix effects, which can interfere with the measurement of certain substances; performing calibrations in the same matrix as the sample can reduce this effect.
Quantitative analysis is typically carried out using either external or internal standards. For external standards, a calibration curve is made by measuring different known concentrations of the analyte of interest; the sample is then run separately from the standards. For internal standards, the standard is in the same sample as the analyte of interest, allowing the measurements to be taken simultaneously. Typically, a different species, called the internal standard, is added, and the ratio of the responses for the analyte and the internal standard is calculated. The idea is that this ratio of responses is proportional to the ratio of their concentrations; the proportionality constant is called the response factor. The method must be able to distinguish between the analyte of interest and the internal standard, but any sample losses that occur after the internal standard is added should be similar for both substances, so the ratio of responses stays the same. A special case of using internal standards is the method of standard additions, in which increasing amounts of the analyte are added to the solution and the original amount of analyte is back-calculated. Internal standards can be used in chromatography, electrochemistry, and spectroscopy.
An internal standard is a substance added in a constant amount to a sample, blank, and standard in an analysis. An internal standard can compensate for both systematic and random errors. For example, if there are instrument fluctuations that cause random errors in the measurement, these fluctuations are expected to be the same for both the internal standard and the analyte and thus the ratio of signals does not change. For systematic errors, such as matrix effects of the solvent, as long as the effect is equal for both the standard and the analyte, the ratio is again unaffected.
The disadvantage of internal standards is that a suitable one can be hard to find. The internal standard must have a signal that is similar to the analyte's, but different enough that the instrument can distinguish them. The internal standard should also not be present in the sample matrix, so that the added standard is the only source of it. Occasionally, a major constituent of a sample that is constant in concentration can be used as the internal standard instead of an added standard, but its concentration must be well known. Finally, the internal standard should not suppress or enhance the signal of the analyte.
In chromatography, one of the largest sources of error is often the injection. If manual injections are used, there can be errors in loading the syringe properly. Volumes are typically small (~1 µL) so there are uncertainties in reproducibly injecting this small a volume, often a couple of percent relative standard deviation (RSD). In gas chromatography (GC), the sample is injected into a heated port and evaporation from the needle tip can result in variations in volume injected. Auto-samplers help with both the error in loading the syringe and the error in injecting quickly to avoid evaporation in GC, but the error can still be 1–2% RSD. With chromatography, peak area is generally used instead of peak height, as the peaks get wider and shorter with time but the peak area is constant. Thus, for internal standards, ratios of peak areas are used in chromatography instead of peak heights.
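The benefit of ratioing can be illustrated with a toy simulation (the numbers are illustrative, not data from this experiment): a few percent of injection-volume variability scales both peak areas equally, so the area ratio is far more reproducible than either area alone.

```python
import random
import statistics

random.seed(0)
true_analyte, true_is = 100.0, 55.0        # nominal peak areas (arbitrary units)

areas_x, areas_is, ratios = [], [], []
for _ in range(1000):
    inj = random.gauss(1.0, 0.02)          # ~2% RSD in injected volume
    noise_x = random.gauss(1.0, 0.005)     # small uncorrelated detector noise
    noise_is = random.gauss(1.0, 0.005)
    ax = true_analyte * inj * noise_x      # injection error scales both peaks
    ais = true_is * inj * noise_is
    areas_x.append(ax)
    areas_is.append(ais)
    ratios.append(ax / ais)                # injection error cancels in the ratio

def rsd(values):
    # relative standard deviation, in percent
    return statistics.stdev(values) / statistics.mean(values) * 100

print(f"analyte peak area RSD: {rsd(areas_x):.1f}%")   # dominated by injection error
print(f"area ratio RSD:        {rsd(ratios):.1f}%")    # only the uncorrelated noise remains
```

The individual peak areas show roughly the 2% RSD of the injection, while the area ratio's RSD is set only by the small uncorrelated noise terms.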
For a calibration with an internal standard, a response factor is calculated. The response factor (R) is the ratio of the signals divided by the ratio of the concentrations, R = (A_X / A_IS) / (C_X / C_IS), where X is the analyte and IS is the internal standard.
For chromatography, area (A) is used. The response factor can be calculated from a calibration plot of A_X/A_IS vs. C_X/C_IS, where the response factor is the slope and the y-intercept is assumed to be 0. Once the response factor is known from the standards, the concentration of the unknown can be calculated from the area ratio measured in the experiment.
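As a minimal sketch of this calculation (the area ratios below are made up, chosen only to reproduce the response factor of 1.8 reported in the Results):

```python
# Internal-standard calibration sketch with illustrative data.
area_ratios = [0.36, 0.90, 1.80]   # A_X / A_IS measured for each standard
conc_ratios = [0.20, 0.50, 1.00]   # C_X / C_IS for each standard

# Least-squares slope through the origin (y-intercept assumed to be 0):
# R = sum(x*y) / sum(x*x)
R = (sum(x * y for x, y in zip(conc_ratios, area_ratios))
     / sum(x * x for x in conc_ratios))

# Back-calculate an unknown: C_X = (A_X / A_IS) / R * C_IS
c_is = 0.33                  # mg/mL internal standard in the unknown's vial
measured_area_ratio = 1.78   # from the unknown's chromatogram
c_x = measured_area_ratio / R * c_is

print(f"response factor R = {R:.2f}")
print(f"unknown concentration = {c_x:.2f} mg/mL")
```

Forcing the fit through the origin reflects the assumption that zero analyte gives zero signal ratio; a nonzero intercept in practice would indicate an interference or blank contribution.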
1. Proper Sample Handling: Making a Solution
- Take a clean beaker and weigh the correct amount of sample into it. Record the actual mass used. In this example, a solution of adenine is made in a volumetric flask for use as an internal standard in the next analysis. The mass of adenine is 100 mg. Do not weigh directly into a volumetric flask; its long neck makes it difficult to add or remove the adenine.
- Add roughly 25 mL of solvent (in this case dimethyl sulfoxide (DMSO)) to the beaker and let it stir to dissolve. In this example, the final solution is made in a 50-mL volumetric flask, so only add about 25 mL so the beaker can be rinsed and the solution made up to the final volume.
- Once the solid has dissolved, pour the solution into the volumetric flask.
- Rinse the beaker and the stir bar with small amounts of solvent, about 10 mL, and pour the rinse into the volumetric flask. Repeat twice more. This helps ensure proper solution transfer.
2. Preparation of an Internal Standard Calibration Curve
- Prepare the desired standard samples for gas chromatography analysis. In this example, caffeine is extracted from coffee using acetonitrile and then adenine is used as an internal standard for measurement.
- For the caffeine samples, weigh out the amount of sample needed to make a 1 mg/mL solution. If using a 10-mL volumetric flask, that is 10 mg.
- Weigh the analyte into a beaker, add a few mL of solvent (here methanol) to dissolve, then quantitatively transfer to the volumetric flask using 3 rinses.
- Make 3 more standards in a similar manner with 0.2, 0.5, and 2 mg/mL caffeine.
- Put 1 mL of each caffeine standard into a sample vial.
- Add 0.2 mL of 2 mg/mL adenine internal standard to each sample vial.
- Run the gas chromatography experiment with each caffeine standard. For each chromatogram, calculate the ratio of peak areas for the caffeine vs the standard.
- Make a plot of the area ratio vs the concentration ratio. The slope of that plot is the response factor.
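The 0.33 mg/mL adenine concentration quoted in the Results follows from the volumes above; a quick dilution check (a sketch using the protocol's volumes):

```python
# Dilution check for the spiked vials, using the volumes from the procedure above.
v_sample = 1.0        # mL of caffeine standard (or coffee extract) in the vial
v_is = 0.2            # mL of adenine internal standard added
c_is_stock = 2.0      # mg/mL adenine stock concentration

v_total = v_sample + v_is
c_is_final = c_is_stock * v_is / v_total   # adenine concentration in the vial
print(f"adenine in vial: {c_is_final:.2f} mg/mL")

# The caffeine standards are diluted by the same spiking step:
for c_std in (0.2, 0.5, 1.0, 2.0):         # mg/mL before spiking
    print(f"{c_std} mg/mL standard -> {c_std * v_sample / v_total:.2f} mg/mL in vial")
```

Because every vial receives the same spike, the adenine concentration is identical across standards and samples, which is what makes the area ratios comparable.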
3. Preparation of a Real Sample with Internal Standard for Gas Chromatography
- Prepare the desired sample for gas chromatography analysis. In this example, caffeine is extracted from coffee using acetonitrile.
- For the coffee sample, weigh 2 g of coffee into a 100-mL beaker. Record the exact mass of the coffee.
- Add 20 mL of acetonitrile to the beaker.
- Allow it to sit for 20 min, stirring frequently.
- Filter the coffee grounds out using a filter paper in a funnel.
- Rinse the filter paper 3x with small amounts of acetonitrile (5 mL).
- Measure the final volume of the filtrate. It should be about 35 mL.
4. Run the Sample and Calculate the Concentration
- Take 1 mL of the coffee extract sample, add 0.2 mL of the internal standard in a vial, and place the vial into the auto-sampler rack.
- Run a GC analysis of the sample. Make sure that the GC conditions are such that the caffeine and adenine separate. In this example, isothermal separations are performed at 200 °C.
- After the analysis, compute the peak area for both the internal standard peak and the analyte peak. Using their ratios, calculate the amount of caffeine in the sample.
5. Results: GC Analysis of Caffeine with Internal Standard
- GC analysis of caffeine is shown in Figure 1. Adenine is used as an internal standard. The ratio of peak areas can be measured and plotted vs the ratio of the concentrations. The slope of the plot is the response factor (in this case 1.8).
- Figure 2 shows a chromatogram of a coffee sample with the adenine internal standard. The ratio of the peak areas is 1.78. Using the response factor and the known concentration of adenine (0.33 mg/mL), the concentration of caffeine in the unknown sample is calculated to be 0.33 mg/mL.
Figure 1. Calibration plot using an internal standard. A plot of the area ratios vs concentration ratios for 3 standard samples of caffeine (1, 0.5, and 0.2 mg/mL) with 0.33 mg/mL adenine internal standard added to each. The slope of the line is 1.8, which is the response factor.
Figure 2. Chromatogram of coffee with adenine internal standard. A plot of the response of the FID detector to samples. The three main peaks are adenine (IS), caffeine, and palmitic acid.
Sample loss can occur every time a sample is handled or transferred, thereby making accurate calculations of concentration difficult.
To ensure accuracy, the effects of sample loss must be minimized through careful sample preparation and by limiting the number of sample handling and transfer steps. However, errors can also arise from systematic sources, such as incomplete sample manipulations, matrix effects, and variations in the analytical procedure.
These sources of loss can be accounted for by adding a known concentration of a species that is similar, but not identical, to the compound of interest. This is called an internal standard. Any sample losses that affect the internal standard should affect the analyte similarly, allowing the concentration to be accurately calculated.
This video will illustrate the use of an internal standard and proper lab technique to account for sample loss when determining the concentration of an unknown.
An internal standard is a substance added in a known amount to standards, samples, and blanks during an analysis.
In chromatography and spectroscopy, the ratio of the signals for the internal standard and the analyte is calculated. This ratio is proportional to the ratio of the analyte and standard concentrations; the proportionality constant is called the response factor.
The response factor, R, can be expressed by the following equation, where A represents the analytical signals of the sample and internal standard and C represents their concentrations: R = (A_X / A_IS) / (C_X / C_IS).
An internal standard can compensate for both systematic and random errors. For example, random errors—such as inconsistencies when measuring a sample—will be the same for both the internal standard and the analyte. Therefore, the ratio of their signals will not change.
For systematic errors, such as matrix effects in solution, the ratio will be unaffected as long as the matrix effect is equal for both the standard and the analyte.
While internal standards provide great benefit, it can be difficult to choose one that is suitable. An internal standard must have a signal that is similar, but not identical, to the analyte. It also cannot affect the measurement of the analyte in any way.
Finally, the concentration must be well known. This is achieved by ensuring that the internal standard is not natively present in the sample; thus, the only source of it in solution is the known concentration added.
In the following experiment, the concentration of caffeine in an unknown sample will be determined by gas chromatography.
This is achieved by creating a calibration curve using known caffeine solutions, with adenine as the internal standard. The slope of the calibration curve is equal to the response factor.
Once the response factor is known, the concentration of the unknown can be calculated from its measured chromatogram area ratio.
Now that you understand the basics of internal standards, let's take a look at the procedure.
To begin the procedure, accurately weigh 100 mg of the internal standard, adenine, into a clean beaker.
Next, dissolve it in roughly 20 mL of dimethyl sulfoxide, and mix the solution.
Once the adenine has dissolved, pour the solution into a 50-mL volumetric flask.
Rinse the beaker and stir bar with 10 mL of DMSO, and pour the rinse into the flask. Repeat this rinse twice, to ensure proper solution transfer. Fill to the calibration mark, resulting in an internal standard with a concentration of 2 mg/mL.
Next, weigh 100 mg of caffeine into a beaker to prepare a stock solution. Dissolve the caffeine with a small amount of methanol. Then, use 3 rinses to transfer this solution to a fresh 25-mL volumetric flask and fill to the mark. This is the 4 mg/mL stock solution. Use it to create 3 caffeine standards.
Next, add 0.2 mL of the internal standard, adenine, to each flask. Fill each to the final volume with methanol. Transfer each solution to a sample vial.
Run each caffeine standard through a gas chromatograph. Calculate the ratio of peak areas for the caffeine versus the adenine standard.
First, weigh 2 g of coffee into a 100-mL beaker, and record the weight.
Next, add 20 mL of acetonitrile to extract the caffeine from the coffee. Allow the solution to stir for 20 min.
Using a Büchner funnel, filter out the coffee grounds. Rinse the beaker with a small amount of acetonitrile, and pour this rinse into the funnel. Repeat the rinse twice.
Measure the final volume of the filtrate; it should be approximately 35 mL.
To prepare the sample for analysis, add 1 mL of the coffee extract to a sample vial. Then, add 0.2 mL of the adenine internal standard, and place the vial into the instrument's auto-sampler rack.
Run a gas chromatography analysis of the sample, ensuring that the conditions are such that the caffeine and adenine are separated.
After completing the analysis, compute the peak area for both the internal standard and the analyte.
Once all the samples have been analyzed, the standard calibration curve can be determined for the caffeine/adenine solutions by plotting the ratios of the peak areas versus the ratios of the concentrations. The slope of this line, which represents the response factor, was 1.8.
Next, the GC data from the extracted coffee sample is analyzed. The ratio of the peak areas was calculated to be 1.78. Using the response factor and the known concentration of the internal standard, adenine, the concentration of caffeine in the unknown sample was calculated to be 0.33 mg/mL.
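The final number can be reproduced from the values stated above (a quick arithmetic check, not part of the original protocol):

```python
# Quick check of the reported result, using the values stated above.
R = 1.8                  # response factor (slope of the calibration line)
area_ratio = 1.78        # caffeine / adenine peak-area ratio for the coffee vial
c_adenine = 0.33         # mg/mL adenine internal standard in the vial

# C_caffeine = (A_caffeine / A_adenine) / R * C_adenine
c_caffeine = area_ratio / R * c_adenine
print(f"{c_caffeine:.2f} mg/mL")   # agrees with the reported 0.33 mg/mL
```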
Many different types of analyses, across various scientific disciplines, utilize internal standards to minimize the effects of errors and sample loss.
The effects of sample loss encountered during sample preparation can be minimized using internal standards, because losses affect the analyte and the standard alike, keeping their concentration ratio nearly constant.
In this example, bioactive lipids were extracted from lysed cells using a liquid-liquid extraction process. Stable isotope internal standards were added at the beginning of extraction to account for errors during sample preparation.
Internal standards were not only critical for the preparation of the bioactive lipids, but also for the analysis. The lipids were separated using high-performance liquid chromatography, and analyzed via mass spectrometry.
In spectroscopy, internal standards can help correct for random errors due to changes in light source intensity. If a lamp or other light source has variable power, it will affect the absorption and consequently, emission of a sample. However, the ratio of an internal standard to analyte will stay constant, even if the light source does not.
In chromatography, one of the largest sources of error is the injection. Auto-samplers help minimize this, but error can still be 1–2% relative standard deviation.
In this example, vapor standards containing an internal standard were analyzed using gas chromatography to establish a calibration curve. Once this was complete, the unknown sample could then be measured and the losses due to volatility of the sample accounted for.
You've just watched JoVE's introduction to internal standards. You should now understand best practices for minimizing sample loss, internal standards, and response factors.
Thanks for watching!
Applications and Summary
Internal standards are used in many fields, including spectroscopy and chromatography. In spectroscopy, internal standards can help correct for random errors due to changes in light source intensity. If a lamp or other light source has variable power, it will affect the absorption and consequently, emission of a sample. However, the ratio of an internal standard to analyte will stay constant, even if the light source does not. One example of this is using lithium (Li) as an internal standard for the analysis of sodium in a blood sample by flame spectroscopy. Li is chemically similar to sodium but is not natively found in blood.
For chromatography, internal standards are often used in both gas chromatography and liquid chromatography. For applications with mass spectrometry as the detector, the internal standard can be an isotopically-labeled version of the analyte, so that its molecular weight (MW) differs from that of the analyte of interest. Internal standards are commonly used in pharmaceutical and environmental analyses.