Insights from industry

Development and Applications of Imaging Particle Analysis

Lew Brown, Technical Director at Fluid Imaging Technologies, talks to AZoM about the development and applications of imaging particle analysis.

How has Fluid Imaging evolved to become industry leaders in producing particle size analysis instruments?

Well, it certainly has been an interesting evolution! Although we were incorporated in 1999, the company was only known in a very limited application area, oceanographic research, for its first 5 years. Our flagship product, FlowCAM®, was originally developed at Bigelow Laboratory for Ocean Sciences in Boothbay Harbor, Maine for studying plankton in ocean water.

An extremely novel concept, the FlowCAM was designed to combine the benefits of a flow cytometer and a microscope in a single instrument. Marine biologists from around the world visited the lab and were exposed to the FlowCAM. Many of these scientists found the instrument to be so useful for their research that they expressed interest in acquiring one for their own lab. At this point, Dr. Chris Sieracki, one of the co-developers of the original FlowCAM, formed Fluid Imaging Technologies to manufacture instruments for other researchers.

During the initial 5 years of the company's history, the FlowCAM was sold exclusively to the oceanographic research community. Towards the end of this period, several non-oceanographic customers evaluated the FlowCAM for their particle analysis applications, and eventually purchased instruments. These early "industrial" customers included a flavors and fragrances manufacturer and a petrochemical company. Recognizing the potential of the FlowCAM to address these other "industrial" applications, the company expanded and began marketing to the industrial sector while continuing to sell to the core oceanographic customers. The FlowCAM also began to see use in other aquatic applications, such as limnology (the study of inland waters) at this time.

From that point forward, growth became more rapid in both the aquatic and industrial parts of the business. We have moved twice in that period into increasingly larger facilities, and have also grown tremendously internationally. In fact, Fluid Imaging was named the US Small Business Administration's (SBA) Small Business Exporter of the Year in 2008, and subsequently Exporter of the Year 2010 by the Maine International Trade Center.

FlowCAM was really the first commercially available dynamic imaging particle analysis system on the market, and it is being continually enhanced, so we have truly been an “industry leader” in this technology.

What is ‘dynamic imaging particle analysis’?

In general, “imaging particle analysis” is the use of digital images of particles to measure each particle’s size and shape. Essentially, you can think of it as classical microscopy where the operator (viewer) has been replaced by a computer to extract the information from the images. While the actual algorithms for performing this type of analysis have been around for many years (beginning in the 1960’s), these operations require immense amounts of computing power, which was simply not available to the general researcher until fairly recently (the late 1980’s).

Today, almost all microscope manufacturers offer a digital imaging option which adds a digital camera, computer and some software to the microscope package. However, these are “static” systems, in that they can only take one picture at a time of the microscope’s field of view. In order to acquire statistically significant numbers of particle images, the operator has to move (either manually or by computer control) the microscope stage after each image to view a different part of the slide. So, in a similar fashion to manual microscopy, these “static” systems are still fairly slow and operator-intensive.

“Dynamic” imaging particle analysis overcomes this limitation by having the sample containing the particles actually “stream by” the microscope optics, so that thousands of particle images per second can be captured. In order to “freeze” the moving particles in space, some sort of strobed illumination source is combined synchronously with a very short shutter speed in the camera. As each frame of the field of view is captured, the software, in real time, extracts the particle images from the background and stores them.

Concurrently, a row in a spreadsheet is populated with both spatial and gray-scale measurements for each particle. In FlowCAM, we store over 30 different measurements for each particle at a rate of up to 50,000 particles/minute, with no operator intervention required. This means that we can quickly gather statistically significant populations of particle data.
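To make that extract-and-measure step concrete, here is a minimal sketch in Python, assuming a simple background-subtraction approach; the function name, thresholds and measurements are illustrative assumptions, not FlowCAM's actual implementation:

```python
# Sketch: subtract a stored background from each frame, group the
# differing pixels into particles, and record measurements per particle.
import numpy as np
from scipy import ndimage

def measure_frame(frame: np.ndarray, background: np.ndarray,
                  threshold: int = 20, um_per_pixel: float = 0.5):
    """Return one measurement record per particle found in `frame`."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    labels, n = ndimage.label(diff > threshold)  # one label per particle
    records = []
    for i in range(1, n + 1):
        mask = labels == i
        area_um2 = mask.sum() * um_per_pixel ** 2
        # Area-based diameter: the diameter of a circle with this area.
        abd_um = 2.0 * np.sqrt(area_um2 / np.pi)
        records.append({"area_um2": area_um2, "abd_um": abd_um,
                        "mean_gray": frame[mask].mean()})
    return records
```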

How does this method compare to traditional techniques for the analysis of microscopic particles in fluids?

Well, as previously discussed, manual microscopy has been used for particle analysis since the 1600’s, and remains to this day one of the most important techniques. However, it has always been limited to statistically insignificant quantities of particles due to the time required to prepare the sample and analyze it by eye.

To overcome the limitations of microscopy, techniques for the rapid analysis of microscopic particles began to be developed, the earliest example being the Coulter Counter, patented in 1953 (and still very widely used today). Coulter Counters are what I refer to as “volumetric-based” particle analyzers: they do not directly measure particle size; rather, they measure a signal that is proportional to the particle volume. In order to convert the signal into volume, the system must be calibrated by passing spheres of known diameter (volume) through the system and measuring their signal. Once the volume is known, an assumption is made that all particles are spherical in shape, and a single-number measurement is made for each particle by converting the volume into an Equivalent Spherical Diameter (ESD).
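The ESD conversion itself is just a rearrangement of the sphere volume formula; a brief sketch (the function name is ours, the formula is standard):

```python
import math

def equivalent_spherical_diameter(volume: float) -> float:
    """Convert a measured particle volume to an ESD.
    From V = (4/3) * pi * (d/2)**3, solving for d gives d = (6V/pi)^(1/3)."""
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

# A 10 um diameter calibration sphere has volume (4/3)*pi*5**3 ~ 523.6 um^3;
# converting back recovers the 10 um diameter.
v = (4.0 / 3.0) * math.pi * 5.0 ** 3
print(equivalent_spherical_diameter(v))  # ~10.0
```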

Almost all of the most common automated particle analysis systems out there are “volumetric-based” like the Coulter Counter, including techniques such as laser diffraction, light obscuration and light scattering. All of these techniques are “indirect” measurements: they do not measure physical dimensions. Rather, they measure some signal that can be shown to be proportional to a physical dimension, in this case volume. The volume is then converted to an ESD by assuming that all particles are spherical in shape. These volumetric systems are able to process statistically significant numbers of particles in a short period of time and then present a Particle Size Distribution (PSD) showing particle size versus either frequency or volume.

In an “imaging-based” system, particle measurements are made directly from an image of the particle. Since the optics of the system are fixed, distance measurements on the image (“image space”) are directly converted to real distance measurements on the object (“object space”) by knowing the overall system magnification. Imaging systems do not have to make any assumptions about a particle’s shape because they measure directly from the image. In fact, this means that multiple measurements can be made on each particle to yield more information about the particle, such as length, width, circularity and other morphological shape descriptors. In addition, gray-scale measurements can also be made such as intensity, transparency and color information (when a color camera is used).
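For example, the image-to-object conversion reduces to a single scale factor; a brief sketch, with assumed (not FlowCAM-specific) numbers for pixel pitch and magnification:

```python
# Illustrative conversion from "image space" to "object space".
PIXEL_PITCH_UM = 5.0   # physical size of one camera pixel, in microns
MAGNIFICATION = 10.0   # overall optical magnification (assumed value)

def pixels_to_microns(length_px: float) -> float:
    """Convert a length measured on the image (pixels) to the real
    length on the particle (microns)."""
    return length_px * PIXEL_PITCH_UM / MAGNIFICATION

print(pixels_to_microns(40))  # a 40-pixel feature is 20 um at 10x
```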

So, the primary difference between “volumetric-based” and “imaging-based” particle analysis systems is that volumetric systems make only a single, indirect measurement, while imaging systems make multiple, direct measurements. Last, but certainly not least, an imaging system gives you a concrete record of each particle in the form of its image, which can be viewed directly by a human to better understand the data (and to ensure that the data is being properly interpreted). On the other hand, volumetric systems are truly “black boxes” which produce distributions with no sort of “audit trail”, and no way of knowing if the answers are a correct interpretation without using microscopy to check the results.

What features does digital imaging bring to automated analysis of particles?

If you look up the word “analysis” in the dictionary, you will find that all the primary definitions talk about the “separation of ingredients in a mixture”. Since volumetric systems must assume all particles to be spherical and only measure ESD, they cannot tell you anything about what a sample is composed of. So, in reality, most particle analysis systems really cannot do “analysis” in the strict sense of the word, they are merely particle “sizers” and/or “counters”.

So, while they will output a graph showing particle size distribution, they cannot give you any information about “what’s under the curve” (i.e. the components that make up the sample). For a homogeneous sample, this is fine; but in the real world many of the samples being analysed are heterogeneous, composed of many different particle types. Even in a homogeneous sample, one may be interested in knowing if there are contaminants or impurities present, which cannot be found with a volumetric system.

So, the key benefit of imaging is that we can make multiple (over 30 in FlowCAM) measurements for each particle, which yields particle shape information that can be used to distinguish different particle types in a mixture. One of the simplest examples of this would be from our origins, identification of different species of plankton in ocean water. A volumetric system would only be able to tell you a size distribution based on ESD for the sample, whereas FlowCAM uses software filters to separate different species in the sample into their respective groups based on particle shape information. So, the imaging system can do true “analysis” of the sample, i.e. breaking it into its component parts, while volumetric systems cannot.
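A hedged sketch of what such a shape-based software filter might look like over a per-particle measurement table; the column names and cutoff values are invented for illustration and are not VisualSpreadsheet's actual parameters:

```python
import pandas as pd

# Four particles with a few of their measured properties (toy values).
particles = pd.DataFrame({
    "esd_um":       [12.0, 85.0, 14.0, 90.0],
    "aspect_ratio": [0.95, 0.20, 0.90, 0.25],
    "circularity":  [0.98, 0.30, 0.95, 0.35],
})

# Round, compact particles (e.g. one plankton species) vs. long thin ones.
round_cells = particles[(particles["circularity"] > 0.9) &
                        (particles["aspect_ratio"] > 0.8)]
chains      = particles[particles["aspect_ratio"] < 0.5]
print(len(round_cells), len(chains))  # 2 2
```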

Sometimes people ask, “why so many measurements?” when we talk about having over 30 of them. It’s a very valid question! The reason is that more measurements give us finer discrimination between particle types. In the plankton example, many times we will see two different species that have the same general shape, but may be distinguished by more advanced measurements like “circularity”, or perhaps by gray scale attributes (such as one being opaque and the other transparent).

In some applications, the particles are in fact basically spherical in shape and homogeneous, where one would think that a volumetric approach might be sufficient. However, in many of these applications, it is desirable to have particles as “spherical” as possible for product performance reasons.

Examples of this might be laser toner particles, proppants for oil and gas wells, or HPLC column packing material. In all of these cases, the products perform best when the particles are most spherical in shape. FlowCAM measures three different “circularity” properties which are directly related to how spherical the particles are. These measurements can be used to “grade” products based on a distribution of circularity measurements, and in some cases are used in QA/QC to decide on acceptability of product by either the manufacturer or buyer.
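For reference, the textbook circularity measure is 4πA/P², which equals 1.0 for a perfect circle and falls toward zero for irregular shapes; a short sketch (this is one common definition, not necessarily any of FlowCAM's three):

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Textbook circularity: 4*pi*A / P**2; 1.0 for a perfect circle."""
    return 4.0 * math.pi * area / perimeter ** 2

# A circle of radius 10: A = pi*100, P = 2*pi*10 -> circularity 1.0
print(circularity(math.pi * 100, 2 * math.pi * 10))  # 1.0
# A 10 x 40 rectangle (A = 400, P = 100) is much less circular:
print(circularity(400, 100))  # ~0.50
```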

One of the most technically challenging applications FlowCAM is being used for is the characterization of sub-visible particulates in biologics (biopharmaceuticals). The “incumbent” technique in this application is light obscuration, which is volumetric. However, many of the particulates in these samples are protein aggregates, which are transparent in nature.

Light obscuration does not work well with transparent particles since light passes through them relatively unchanged. This means that the light obscuration system will, best case, mischaracterize the particle size and, worst case, not even detect the protein aggregates. These aggregates are also very amorphous in shape, ranging from spherical clumps to single fibrous strands, so ESD is generally not a good descriptor of size. Imaging systems are able to detect very subtle differences in transparency, and are therefore able to detect and properly characterize these protein aggregates both in size and shape.

Finally, if these biologics are packaged in disposable syringes, much of the particulate matter detected will be silicone droplets that are used to lubricate the syringe. These particulates are expected and not harmful, so should not be counted in a regulatory environment against allowable amounts of sub-visible particulates. No volumetric approach can tell the difference between the silicone droplets and other particulates. FlowCAM, however, can quickly remove the silicone droplets from the particle counts by using a software filter that looks for these perfectly spherical, semi-opaque particles.
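A minimal sketch of that kind of exclusion filter, with assumed column names and thresholds (not FlowCAM's actual criteria):

```python
import pandas as pd

particles = pd.DataFrame({
    "circularity":  [0.99, 0.45, 0.98, 0.60],
    "transparency": [0.55, 0.85, 0.50, 0.80],  # 0 = opaque, 1 = clear
})

# Silicone droplets present as near-perfect circles, semi-opaque.
is_droplet = (particles["circularity"] > 0.95) & \
             (particles["transparency"] < 0.7)

counted = particles[~is_droplet]  # what remains counts toward the limit
print(len(particles), "detected,", len(counted), "counted")  # 4 detected, 2 counted
```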

Benchtop FlowCAM®.

What data do your particle size analysis instruments present, and how does the end-user manage this information?

One of Fluid Imaging’s key strengths and differentiators is our software, called VisualSpreadsheet®. Almost all particle analysis operations are spreadsheet-based. For instance, similar to a sieve analysis, one may want to divide the particle size distribution (PSD) into different size ranges of particles. So, one simply runs a filter on the spreadsheet to find all particles in a given size (ESD) range, and the software returns tabular statistics for that filter (number, mean, standard deviation, etc.).
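As an illustration, that sieve-like operation can be expressed as a group-by over ESD ranges; the bin edges and column names here are assumptions for the sketch, not VisualSpreadsheet's interface:

```python
import pandas as pd

particles = pd.DataFrame({"esd_um": [4.2, 7.8, 15.1, 22.5, 48.0, 130.0]})

bins = [0, 5, 10, 25, 50, 300]  # um, like a stack of sieves
particles["size_bin"] = pd.cut(particles["esd_um"], bins)

# Count, mean and standard deviation per size range.
print(particles.groupby("size_bin", observed=True)["esd_um"]
      .agg(["count", "mean", "std"]))
```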

The difference with VisualSpreadsheet is that, as the name implies, we also present the results visually, as the particle images themselves. So, when you run the filter to look for specific particle types based on size, shape, gray-scale or any combination of these, the results are instantly displayed as a collage of images that matched the filter parameters, along with the summary statistics. The software also allows the user to sort the particle images based on any of the measured parameters. So, for instance, one can sort on ESD to instantly look at the smallest or largest particle images.

The human brain is the most powerful computer available; our ability to look at something and instantly better understand the information and act on it is something well beyond what any computer can do. If “a picture is worth a thousand words”, then I would imagine that thousands of pictures would be worth a whole lot more! The ability we give our users to interact with the actual particle images through VisualSpreadsheet is truly an enabling technology. One can interactively, via sorting and filtering, figure out the best parameter or set of parameters by which to segment out particular particle types.

Viewing the particle images allows you to instantly see what is going on in a process, rather than simply relying on numbers. For instance, one can immediately see if the largest particles in the distribution are aggregates, or just large single particles. Which one they are tells you dramatically different things about what is going on in the process!

We take it one step further, though, by adding in statistical pattern recognition. In a VisualSpreadsheet “statistical filter”, the operator identifies several example particles of the type they are interested in finding by merely clicking on them, and invokes “find particles like selected”. This initiates a statistical pattern recognition process that automatically figures out the best combination of parameters to find similar particles, and then ranks them in order of their statistical similarity to the original particles selected.

The beauty of this is that it does not require the operator to make any judgement on how best to isolate those particles (i.e. what measurements to filter by, etc). It is a purely statistical approach and, like all other filters in VisualSpreadsheet, they can be stored to apply to any data set. In the case of the statistical filter, that would mean that the exact same statistical evaluation is made on each different sample, eliminating any operator bias.
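One generic way to implement a “find particles like selected” operation is nearest-neighbour ranking over normalized measurements; a sketch of that standard statistical approach (not VisualSpreadsheet's actual algorithm):

```python
import numpy as np

def rank_by_similarity(features: np.ndarray, selected: list[int]) -> np.ndarray:
    """features: (n_particles, n_measurements); selected: row indices the
    operator clicked. Returns particle indices, most similar first."""
    # Normalize so no single measurement dominates the distance.
    z = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-9)
    prototype = z[selected].mean(axis=0)          # "average" selected particle
    distances = np.linalg.norm(z - prototype, axis=1)
    return np.argsort(distances)

# Two measurements (e.g. ESD and circularity) for four particles:
features = np.array([[10, 0.9], [80, 0.3], [12, 0.95], [75, 0.25]])
print(rank_by_similarity(features, selected=[0]))  # [0 2 3 1]
```

Because the ranking is computed from the data rather than from operator-chosen cutoffs, saving and re-applying the same filter gives every sample an identical evaluation, which is the bias-elimination point made above.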

Of course, the software also represents the data in all the classical ways like histograms, scatterplots, summary statistics, etc. We allow custom graphs to be created for any of the parameters measured, as well as overlaying multiple runs on the same graphs for comparison purposes.

We can also export any of the data, statistics, graphs, etc. to other systems such as LIMS. But the real power lies in the interactivity with the images themselves. As I stated previously, having the images stored for posterity also gives an ironclad audit trail. If a customer were ever to question a supplier’s statement of material composition, the supplier can show the customer the actual images that the characterization was made from!

Can you describe the differences between the FlowCAM VS-Series and the PV-Series?

The VS-Series is our flagship R&D product, from which all other FlowCAM derivatives come. I say it is an “R&D product” in that it is extremely flexible, and allows for multiple different configurations which are upgradeable from a base unit. The major flexibility is that it can accept four different magnification configurations in the same instrument, giving it a potential overall range from 2µm to 2mm. So, for an R&D environment, where many different sample types or size ranges might be present, this flexibility means that a single instrument can handle multiple size ranges.

It is important to note that this range cannot be covered all at once with a single configuration, and the maximum/minimum size is really application-dependent based on the measurements needed.

The VS-Series also offers optional fluorescence measurement and triggering. A FlowCAM can be equipped with either a green (532nm) or blue (488nm) laser with two different fluorescence measurement channels (stock and custom fluorescence bands available). One of the common uses for the fluorescence capability is with water samples, where fluorescence is used as an asynchronous trigger for capturing images.

In a water sample, the researcher is typically only interested in the living algae present, which may be quite sparse in concentration. The algae will fluoresce when struck by the laser, and that fluorescence signal is then used to trigger the camera. This ensures that none of the algae particles are missed, and reduces the number of extraneous, non-living particles that might be captured if synchronous acquisition were used.

In contrast, our PV-Series product is meant to be used in more of a “production” type of environment. The PV-Series has only one magnification in each unit (two different magnification configurations are available), so it is restricted to a single size range of particles per instrument. In many applications we see, this is all the customer needs. For example, a manufacturer produces a product that is always within the 5µm to 300µm size range, so they will use a PV-40 instrument which perfectly covers that range. It is really meant for applications where the same material will be measured over and over again, as in a QC environment.

What we have seen in many cases, particularly in process analysis, is that the process R&D department may use a VS-Series to investigate many different process variables. Their research may uncover some specific particle properties that are important to monitor during production in order to assure quality. Once those properties and the methodology (filtering, etc.) have been defined, they may wish to deploy a production solution using the PV-Series as the platform to deliver it, at a lower cost and with a smaller footprint. So, many of our customers end up with both products.

Benchtop FlowCAM® in use.

What are the primary applications for these products?

Well, we still do quite a large portion of our business in the original water markets like oceanographic research, drinking water analysis, etc. That’s what FlowCAM was originally designed for, so it makes sense that it still is a big part of our business. We are also seeing algae analysis move into more “industrial” applications, such as algae biofuel, nutraceuticals, and even algae-based plastics.

In all honesty, though, FlowCAM doesn’t really care what the particles are… “a particle is a particle”! This is a bit of a “double-edged sword” for us as a company, however: it means that the system can address almost any particle analysis problem (which is good), but it can make it hard for a small company to stay focused! So we have been very careful about selecting which markets we choose to enter. For example, we have made a concerted effort to stay out of the medical/cell analysis applications because there are already some large players there with very specific, vertical solutions available.

The biologics applications I mentioned earlier have been a very large market for us, which continues to grow. We have had very good success in other materials characterization applications such as superabrasives (micronized diamonds), chromatography column packing material characterization, yeast characterization, fibers analysis, foods/beverages and microencapsulated products. We are also doing quite a bit of business now in the petroleum sector, with applications like oil in water analysis and drilling fluid characterization.

If you were to ask our lab manager what he has run through a FlowCAM, he would hand you back a list of several hundred different materials! Some of the more peculiar things we have analysed in FlowCAM would be: frozen alfredo sauce, explosives, chocolates, shampoos, antacids, dishwasher rinse water, and fibers of all kinds! One of the things that makes working with FlowCAM so much fun is that we never know what we are going to be looking at next!

How do you see dynamic imaging particle analysis developing in the future?

I see the technology being used in an ever-increasing fashion. In all honesty, it has taken longer to really “catch on” within the particle analysis industry than I originally expected when I started at Fluid Imaging almost ten years ago. A lot of this seems to be just general inertia: people are used to the volumetric techniques, and have been using them for so long that they are resistant to change. In an established process, integrating new forms of analytics can be very time-consuming and can be seen as a risk, so it has to be a very deliberate, well-mapped-out process. The other thing is that the sheer volume of data that imaging particle analysis outputs can be a bit disconcerting at first! But that’s where the software really helps.

Because dynamic imaging particle analysis is relatively new compared to the established volumetric techniques, as an industry leader we find that we have to do a lot of education in the marketplace. But when people actually see their materials being imaged at high rates, and then see what they can do with that data, we can see the light bulb go off in their heads! It is truly an enabling technology.

From a technological standpoint, one area where we are spending a lot of effort is finding ways to better segregate subtly different particles, i.e. better image recognition. This means a lot of different things, from getting better quality images, to more particle measurements to help distinguish things, to new algorithms for pattern recognition. In the end, if the true object of “analysis” is the “separation of ingredients in a mixture”, then anything we can do to perform better, more accurate separation of particle types is beneficial.

I also think you will be seeing “other measurements” being added beyond just the images to aid in further particle differentiation. Here, developments in “spectral analysis” as opposed to “spatial analysis” of the particle may prove to be useful. This could mean anything from more channels of fluorescence similar to what is found in flow cytometers, to something like hyperspectral imaging, where spectral band comparisons could aid in identification. We have already seen some “combination instruments” at the very high end using multiple analysis techniques in the same instrument. This type of thing should become more prominent as costs continue to be driven down. It is truly an exciting time to be in the dynamic imaging particle analysis business!

About Lew Brown

Lew Brown holds a Bachelor of Science degree in Imaging Science from Rochester Institute of Technology in Rochester, NY, and has over 25 years of experience in the digital imaging and computer graphics industries. Currently he is the Technical Director for Fluid Imaging Technologies, Inc., where he is responsible for all technical marketing activities related to the application and use of Fluid Imaging’s imaging-based FlowCAM® particle analysis system.

