Calibration and Verification of Nanoparticle Analyzers

Lew Brown presents

…and here’s a transcript for your reading pleasure!

Good day, my name is Lew Brown from Spectradyne, and today I’d like to talk to you about calibration and verification of measurements in nanoparticle analyzers.

As part of this, I’m going to show you one of the most powerful and unique capabilities of the Spectradyne nCS1 system: the ability to use in-measurement positive controls to verify the measurement of your nanoparticles. So let’s get started!

Before we look at an actual example, I’d like to give a bit of background on the topic of nanoparticle analyzer calibration. Simply put, calibration is the comparison between a known measurement, the standard, and the measurement made with a particular instrument. It defines the accuracy and quality of measurements recorded using a piece of equipment, and determines the traceability of those measurements.

In nanoparticle analyzers, calibration is carried out by running size-calibrated nanobeads, meaning they are traceable to some standard (usually NIST in our case), and adjusting any instrument parameters so that the standards produce the correct size results, within the manufacturer’s specification for size error.

The calibrated beads are mixed to a designated concentration, and the instrument or software parameters are then also adjusted to produce the correct concentration results, again within the manufacturer’s specification for concentration error.

In the example shown here, 150 and 208 nanometer calibrated spheres mixed at approximately 2.5×10⁹ particles per mL have been measured in the nCS1, and the results are within our specifications for size and concentration error, with no adjustments required since the measurement cartridges are factory calibrated.
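The pass/fail statements above come down to simple percent-error arithmetic. Here is a minimal sketch; the `percent_error` helper is illustrative, and the numbers plugged in are the ones quoted in this video:

```python
def percent_error(measured, nominal):
    """Unsigned percent error of a measurement relative to its nominal value."""
    return abs(measured - nominal) / nominal * 100.0

# 150 nm beads mixed at 2.5e9 particles/mL, measured at 2.45e9 (values from the video)
size_err = percent_error(150.0, 150.0)   # peak landed right at 150 nm -> 0%
conc_err = percent_error(2.45e9, 2.5e9)  # 2%, within the quoted 3% bound

print(f"size error: {size_err:.1f}%  concentration error: {conc_err:.1f}%")
```

The same arithmetic applied to the 208 nm beads (210 nm measured, 2.67×10⁹ vs. 2.5×10⁹ mixed) gives the "less than 1 percent" and "within 7 percent" figures quoted later.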

All nanoparticle analyzers are calibrated in a similar way. This is a stand-alone procedure that takes place independently of any actual sample measurements.

So, in some instances, an instrument can be accurately calibrated but still not produce accurate results when measuring a real sample. For example, differences in refractive index, viscosity, etc. between the particles and media used in calibration, versus those of the sample, may cause inaccuracies in the sample measurements themselves.

This is particularly a problem with systems based on light scattering, such as NTA and DLS. In those systems, the signal they measure, light scattering intensity, depends on the refractive index difference between the particles and the suspending medium. The suspending medium is typically water or saline, which has a refractive index of 1.33.

So let’s take a look at the difference in refractive index for calibration particles made of polystyrene or silica, versus that for typical biological particles such as extracellular vesicles (chart). It is quite apparent that the refractive index difference for biological nanoparticles is quite a bit smaller than that for the calibration particles.
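To put rough numbers on that contrast, here is a sketch using the Rayleigh (small-particle) scattering factor, which scales the scattered intensity for particles much smaller than the wavelength. The refractive index values are typical literature figures, not measurements from this video:

```python
def rayleigh_contrast(n_particle, n_medium=1.33):
    """Squared Clausius-Mossotti factor |(m^2-1)/(m^2+2)|^2 that sets the
    Rayleigh scattering intensity, where m is the relative refractive index."""
    m2 = (n_particle / n_medium) ** 2
    return ((m2 - 1) / (m2 + 2)) ** 2

# Illustrative indices: polystyrene ~1.59, silica ~1.45, EVs ~1.38, water 1.33
for name, n in [("polystyrene", 1.59), ("silica", 1.45), ("EV", 1.38)]:
    print(f"{name}: relative scattering contrast = {rayleigh_contrast(n):.5f}")

# Same-size comparison: a polystyrene bead scatters on the order of 20-30x
# more light than an EV of equal diameter in water.
ratio = rayleigh_contrast(1.59) / rayleigh_contrast(1.38)
print(f"polystyrene / EV intensity ratio: {ratio:.0f}x")
```

This is why a light-scattering calibration performed on polystyrene beads does not transfer cleanly to dim biological particles.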

So this begs the question: is calibration using these polystyrene or silica beads even valid for biological nanoparticles? Or, for that matter, for any sample where the particles have a different refractive index from the material used to calibrate?

The good news for Spectradyne and our customers is that refractive index makes no difference to the measured signal in an MRPS system like the nCS1, because the measurement is electrical, and entirely independent of the particle’s material properties or shape.

That means, in the case of the nCS1, that the calibration is truly representative of the system performance on any particle type, which is something you cannot say for light scattering based systems like NTA and DLS.

However, the nCS1 has another unique capability that allows us to make in-measurement verification of both size and concentration as a positive control within the actual sample measurement. To do that, we merely add NIST traceable size beads at a known concentration directly into the sample. This yields an in-measurement verification that the sizing and concentration measurements of the nCS1 on a given sample are not only accurate, but also traceable.

Like I said, this is a unique capability of the nCS1, and cannot be done in light scattering systems like NTA and DLS, because the presence of the calibration beads in the sample will skew the entire results. In the nCS1, this does not happen, because each particle is measured individually with no influence from the other particles.

So let’s take a look at a quick example.

First, let’s quickly revisit the calibration data I showed previously. For this calibration run, I mixed 150 and 208 nanometer NIST traceable beads at 2.5×10⁹ particles per mL.

So let’s bring that up in the Viewer. By eye, you can see that the sizing looks very good here. Remember this is using the stored default calibration for the particular mold ID used here, which we can see from the metadata was P12.4.4.

So let’s see how the system did. First, I’m going to select the distribution for the 150 nanometer beads… fit a Gaussian to it… and we can see that the peak comes out right at 150 nanometers, and that the concentration is 2.45×10⁹ particles per mL, meaning that the concentration was within 3 percent of what we mixed it to.

Let’s do the same for the 208 nanometer beads. So, I’ll select them and fit a Gaussian to it. And let’s make that bigger so we can see it. You can see that the 208 nm beads came out at 210 nm, less than 1 percent error, with a concentration of 2.67×10⁹ particles per mL, meaning that the concentration was within 7 percent of what we mixed it to.
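The peak-finding step the Viewer performs can be sketched as an ordinary Gaussian fit. The data below is synthetic stand-in data for the 208 nm bead peak, not the actual run:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    """Gaussian peak model used to locate the bead population's mean size."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic counts vs. diameter, mimicking a 208 nm bead peak with noise
rng = np.random.default_rng(0)
diam = np.linspace(180, 240, 61)                       # size bins, nm
counts = gaussian(diam, 1000, 208, 8) + rng.normal(0, 10, diam.size)

# Fit and report the peak position (initial guesses are rough, by design)
popt, _ = curve_fit(gaussian, diam, counts, p0=[800, 205, 10])
amp, mu, sigma = popt
print(f"fitted peak: {mu:.1f} nm")
```

The fitted mean is what gets compared against the NIST traceable value to produce the sizing-error figure.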

Both the sizes and concentrations are well within the published specifications for the nCS1. It’s important to remember that in this calibration run I used the default calibration values for size and concentration that are supplied with the instrument software… Absolutely no adjustments need to be made. This is one of the key benefits of the nCS1’s microfluidic cartridge implementation: in the majority of use cases, there are no adjustments necessary to keep the instrument running in specification.

Now let’s move on to the in-measurement verification. Again, this is a unique capability of the nCS1, and not possible with other systems, especially those using light scattering techniques.

So first, I’m going to load in a measurement of an extracellular vesicle sample… right here. As is typical for EV samples, you can see a continuous, power-law distribution of sizes, where concentration rises steeply as you go to smaller sizes. To show that this is truly a power-law distribution, I can switch to a log scale just by clicking on here and replotting, and you can see that the distribution is roughly linear when plotted on a log scale. Let’s go back to linear here. Going back to what was discussed earlier, the question is: how can I prove that what we are seeing for this EV distribution is accurate in both size and concentration?

So, for this sample, I can see that there was very little sample population out here near 150 nanometers. So I decided to spike in 150 nanometer NIST traceable beads mixed at 2×10⁹ particles per mL, so the beads would show well above the background sample concentration. Let’s take a look at the same sample with the in-measurement control spiked in. So, we’ll go ahead and load that, and we can see that the distribution looks quite similar to the original, sample-only run.

But let’s check the beads first. I’m going to select the region where the beads are, and fit a Gaussian to it like I did before. So we’ll just select… and we’ll click on Gaussian fit… And you can see that my beads show up as having a mean of 151.4 nanometers and a concentration of 1.94×10⁹ particles per mL. This means the bead spike shows less than 1 percent error for sizing versus the NIST traceable value of 150 nanometers, and my concentration shows only 2 percent error from how I mixed them.

Both of these values are well within the nCS1 instrument spec, and once again I want to emphasize that this measurement was done using the factory default calibration values, i.e. no instrument adjustments were required.

Finally, to really drive the point home, let’s overlay the original, sample-only data set to see how it looks. So to do that, I just click on “multi” here in the viewer, display the CSD, and voila, the two runs overlay perfectly with the exception of where the bead spike is on the second run.

Let’s see how close the concentration of the sample particles is between the two runs. To do that, I go in here. I’m going to select between 65 and 120 nanometers, which is where 90 percent of the EV sample population lies, and get the concentration.

You can see that the concentration measured in the range from 65 to 120 nanometers is 1.21×10⁹ particles per mL for the sample by itself, and 1.25×10⁹ particles per mL for the sample plus the 150 nanometer controls. This shows only a 4 percent difference in particle concentration in that range between the two runs, which I must say is pretty impressive! Certainly it proves that the original data without the beads is verifiable in both size and concentration under identical conditions, with the sizing actually being traceable to a NIST standard.
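That range-select-and-compare step amounts to summing per-bin concentrations over a size window in each run. Here is a sketch with made-up power-law distributions standing in for the measured data; the helper name and the 3 percent offset are illustrative:

```python
import numpy as np

def band_concentration(diam_nm, conc_per_bin, lo=65.0, hi=120.0):
    """Sum per-bin concentrations (particles/mL) over a size window,
    mirroring the Viewer's range-select-and-integrate step."""
    mask = (diam_nm >= lo) & (diam_nm <= hi)
    return conc_per_bin[mask].sum()

# Illustrative power-law size distributions for the two runs (not real data)
diam = np.arange(60.0, 160.0, 5.0)
run_sample = 3e12 * diam ** -1.8    # sample only
run_spiked = run_sample * 1.03      # sample + spike, with a small 3% offset

c1 = band_concentration(diam, run_sample)
c2 = band_concentration(diam, run_spiked)
diff_pct = abs(c2 - c1) / c1 * 100
print(f"band concentration difference: {diff_pct:.1f}%")
```

Because the spike sits near 150 nm, it falls outside the 65–120 nm window and the two band concentrations should agree closely, which is exactly what the real comparison shows.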

So I hope you found this video to be helpful. The key takeaways are that

  • Calibration is critical and needs to be done for every nanoparticle analyzer.
  • You need to understand the limitations of calibration, especially for light scattering systems like DLS and NTA, and
  • The best possible verification of performance on a single sample is an in-measurement control, like the one we demonstrated here.

This is a unique capability of Spectradyne’s nCS1 MRPS system and cannot be accomplished with light scattering systems.

Be sure to check out our other short videos on our website and stay tuned for new additions.

Have a great day!