
TITLE: Novel Approach to Recognize Light Scattering Patterns from Holographic Video Microscopy Using Neural Networks

Proposal Summary / Abstract: Holographic video microscopy yields beautiful light scattering patterns of colloids that can be analyzed to extract useful information about their position, radius, and refractive index. The first step in this analysis is to discriminate between colloidal light scattering patterns and the background noise, so that the patterns can then be analyzed with the Lorenz-Mie theory of light scattering. The goal of this project is to reduce the time spent on this detection step by an order of magnitude using neural networks, so that holographic video analysis can run in real time rather than after an experiment. The new method will be compared against the slower linear-algebra method used previously, to demonstrate both the accuracy and the speed of neural networks.

Background / Introduction: Colloids are all around us in our everyday lives; examples include the milk that babies drink, the blood that keeps us alive, and the insulin that can be the difference between life and death for diabetics. Colloids are mixtures of nanometer- to micrometer-scale particles suspended in a solvent, classically water. The study of these vital fluids falls within the domain of soft matter physics, a field that has been working toward understanding the fundamental forces behind the interactions of soft matter in order to solve important problems such as insulin protein aggregation. The bottlenecks in this research have primarily been imaging these substances and extracting information from them, and the leading methodology is currently holographic video microscopy coupled with the Lorenz-Mie theory of light scattering. Holographic video microscopy has proven to be a versatile and accurate method for obtaining light scattering patterns from colloidal substances.
The technique works by shining a laser through a sample containing particles between 200 nanometers and 10 micrometers in size and collecting the light scattered by each particle on a video camera behind the sample. To find a particle in the scattering pattern, a computer has used a linear-algebra operation to match candidate light patterns against an ideal hologram, and the resulting fit is then analyzed with the Lorenz-Mie theory of spherical light scattering to extract the radius, refractive index, and x, y, and z coordinates. Over the last 10 years, the step of finding the light scattering feature has remained extremely time consuming, taking around 100 ms per feature. Operating a microscopy setup at 30 frames per second leaves less than 33 ms to analyze each frame in real time, and that assumes only one feature per frame (the current rate is around 150 ms per frame). The focus of this project is to bring this time per frame down under the 33 ms mark through the use of trained neural networks, making real-time analysis a reality.
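As a quick illustration of this timing budget, using only the numbers quoted above (the variable names are purely illustrative):

frame_rate_fps = 30
frame_budget_ms = 1000.0 / frame_rate_fps   # about 33 ms available per frame for real time
old_per_feature_ms = 100.0                  # previous feature finder, per feature
current_per_frame_ms = 150.0                # observed per-frame analysis time
print(f"real-time budget: {frame_budget_ms:.1f} ms per frame")
print(f"current cost:     {current_per_frame_ms:.0f} ms per frame "
      f"({current_per_frame_ms / frame_budget_ms:.1f}x over budget)")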
Project Description (contextualization and significance of project): The approach being taken in the Grier lab is to significantly reduce the time spent on finding features by implementing neural networks trained on sets of experimental data. This method involves spending a large amount of time (4-48 hours) training networks to recognize light scattering patterns, but the detection time per feature is then only 10-20 percent of that required by the previous method. These methods have never been used for finding features in soft matter systems, and are novel even in other pursuits, such as the automotive industry, where companies like Google and Tesla have recently been experimenting with neural networks to detect objects for self-driving cars. We at the Grier lab hope that technology entrusted with people's lives can also have positive applications in soft matter labs.

A neural network is a form of artificial intelligence in the sense that it does not use explicitly programmed procedural logic to arrive at an answer. The specific type of object-classifying network being used here is a Haar cascade detector, akin to the patented Viola-Jones detector that Apple uses for face detection on the iPhone. It works by feeding the training algorithm thousands of positive and negative samples, taken experimentally from prior holographic microscopy video feeds. The positive images consist of a single colloidal light scattering pattern, centered in the image, with a few pixels of background surrounding it. The light scattering patterns contain a single bright center and successive rings of luminosity that fade as the distance from the center increases; the positive images capture the center and one to two of these rings. The negative images consist of experimental backgrounds, i.e. what is found on a holographic video microscopy feed when there are no colloids in view of the objective lens, only the suspension fluid. The Haar cascade program then knows that there are colloids in one set of images and none in the other, and from this information we create training samples, each consisting of a positive image superimposed on a negative image. Because of this feature of the Haar cascade trainer, the positive images must be smaller than the negative images. Once the samples are created, the trainer knows where the colloid is in each sample, while the negative images are preserved without any relevant colloidal light scattering in them. The cascade then uses Haar wavelets to find patterns in the light scattering that are absent from the negative images.
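As a rough illustration of this training step (not the lab's exact scripts), the sketch below assumes the standard OpenCV command-line tools opencv_createsamples and opencv_traincascade, and that cropped 24x24 positive images and background-only frames have already been exported; all file names and parameter values are placeholders.

import glob
import os
import subprocess

# Assumed layout: positives/ holds cropped scattering patterns,
# negatives/ holds background-only frames from the microscope feed.
positives = sorted(glob.glob("positives/*.png"))
negatives = sorted(glob.glob("negatives/*.png"))

# Background list: one negative image path per line.
with open("bg.txt", "w") as f:
    f.write("\n".join(negatives) + "\n")

# Info file: each positive crop contains exactly one centered pattern,
# so the bounding box is simply the whole 24x24 image.
with open("positives.info", "w") as f:
    for path in positives:
        f.write(f"{path} 1 0 0 24 24\n")

os.makedirs("cascade_out", exist_ok=True)

# Pack the positives into a .vec file, then train the cascade. The per-stage
# hit rate and false-alarm rate are the training parameters discussed above.
subprocess.run(["opencv_createsamples", "-info", "positives.info",
                "-vec", "samples.vec", "-num", str(len(positives)),
                "-w", "24", "-h", "24"], check=True)
subprocess.run(["opencv_traincascade", "-data", "cascade_out",
                "-vec", "samples.vec", "-bg", "bg.txt",
                "-numPos", str(int(0.9 * len(positives))),
                "-numNeg", str(len(negatives)),
                "-numStages", "15", "-w", "24", "-h", "24",
                "-featureType", "HAAR",
                "-minHitRate", "0.999", "-maxFalseAlarmRate", "0.5"],
               check=True)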
This process works by moving a window the size of the positive image across each sample; within this window the trainer places Haar wavelets, which can be as simple as a rectangle with a dark half and a bright half. If a wavelet discriminates between the positive and negative images at a better-than-random rate (set in the parameters for the cascade), it becomes a relevant feature and the cascade uses it as one node in the final classifier. The cascade is complete when it has enough nodes to discriminate negative space from positive space at a rate also set in the parameters, usually 99.9% accuracy; in effect, many weak nodes are combined into one strong classifier. Because a computer can quickly run these nodes over a region of no interest and move on when only a few of them activate, the cascade spends most of its time on relevant regions where there is a potential colloid and almost no time on the negative space where there are probably no colloids.
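The following toy sketch (not the OpenCV implementation) shows why this staged structure is fast: each stage is a cheap test, and a window is rejected as soon as any stage fails, so background windows rarely reach the later stages. The thresholds and stand-in "feature" here are invented purely for illustration.

import numpy as np

def make_stage(threshold):
    # Toy "weak" stage: compares the contrast of a window against a threshold.
    # Real Haar stages sum many rectangle-difference features instead.
    return lambda window: window.std() > threshold

stages = [make_stage(t) for t in (1.5, 2.5, 3.5)]  # invented thresholds

def cascade_accepts(window):
    # A window must pass every stage; most background windows fail stage 1,
    # so almost no time is spent on them.
    return all(stage(window) for stage in stages)

rng = np.random.default_rng(0)
background = rng.normal(100, 1, (24, 24))   # flat background: rejected early
pattern = background + 20 * np.eye(24)      # crude stand-in for a bright feature
print(cascade_accepts(background), cascade_accepts(pattern))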
This is a massive time advantage over the previous method, which had no means of initial discrimination. Once a classifier is finished, it can quickly sweep an image and return the potential locations of many colloids in a single frame. The shortened time and the black-box nature of the approach (we cannot inspect exactly what the cascade has learned) lead to some false positives; to combat this, a standard-deviation test is run on every region in which the classifier reports a colloid, and the region only passes if it exceeds a threshold determined experimentally for the specific cascade. Once the regions containing a colloidal light scattering pattern are found, the center of each region is fed into the Lorenz-Mie fitting algorithm, which returns the radius, refractive index, and three-dimensional particle position. This method effectively replaces the costly linear-algebra feature finder with a fast and accurate method for finding features in light scattering patterns.
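A minimal sketch of such a detection pass, assuming OpenCV's Python bindings, a trained cascade file (the name cascade.xml is a placeholder), and a hypothetical lorenz_mie_fit routine standing in for the lab's fitting code, might look like this; the standard-deviation threshold is likewise a placeholder value.

import cv2
import numpy as np

# Load the trained Haar cascade (file name is a placeholder).
cascade = cv2.CascadeClassifier("cascade.xml")

def find_features(frame, std_threshold=5.0):
    """Return candidate feature centers in a normalized hologram frame."""
    gray = frame if frame.ndim == 2 else cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Sweep the classifier over the frame at multiple scales.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    centers = []
    for (x, y, w, h) in boxes:
        region = gray[y:y + h, x:x + w]
        # Standard-deviation test to reject false positives on flat background.
        if region.std() > std_threshold:
            centers.append((x + w / 2.0, y + h / 2.0))
    return centers

# for cx, cy in find_features(frame):
#     radius, index, position = lorenz_mie_fit(frame, cx, cy)  # hypothetical fitter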

Research Strategy: Going forward, the Grier lab hopes to train more cascades and narrow in on the best parameters and training options. The Haar cascade method is not a conventional, hand-coded procedure; rather, it mimics the human brain's ability to discern between objects. Because these areas are at the forefront of computer science, there is little good documentation on the method or on what the different parameters do. To add to the general knowledge of the scientific community, we plan on writing a guide to using Haar cascades in soft matter research labs, since light scattering patterns are unlike the typical inputs used with neural networks. This goal will, in tandem, lead to the development of an extremely strong network for our own purposes, which will be put to use in the Grier lab as soon as the results are in. To train these cascades, two things are needed: large databanks of experimental images and computers with powerful central and graphical processing units. Thankfully, there is a database of experiments that the Grier lab has carried out with holographic video microscopy over the last couple of years, as well as an abundance of computational power on the NYU servers and on my home computer. This means that we are producing as many as five new networks a week and are quickly honing our understanding of the parameters and training protocols.
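For instance, such a parameter sweep might be automated along the lines of the sketch below, which reuses the opencv_traincascade tool and the samples.vec and bg.txt files assumed in the earlier training sketch; the parameter grid and directory names are illustrative only.

import itertools
import os
import subprocess

# Illustrative parameter grid; the real sweep would also record detection
# accuracy and speed for each trained cascade.
stages_options = [10, 15, 20]
hit_rate_options = ["0.995", "0.999"]

for num_stages, hit_rate in itertools.product(stages_options, hit_rate_options):
    out_dir = f"cascade_s{num_stages}_hr{hit_rate}"
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run(["opencv_traincascade", "-data", out_dir,
                    "-vec", "samples.vec", "-bg", "bg.txt",
                    "-numPos", "900", "-numNeg", "500",
                    "-numStages", str(num_stages),
                    "-minHitRate", hit_rate,
                    "-w", "24", "-h", "24", "-featureType", "HAAR"],
                   check=True)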
A full explanation of the holographic video microscopy method would require an entire paper, but a brief overview is given by a figure in a recent Grier lab publication, which shows (top) a raw hologram and (bottom) a normalized hologram; the positive and negative images used to train the neural networks are very similar in appearance.
We are going to put together a public repository on GitHub containing all of the cascades, Python scripts, and instructions, and from these data we are currently starting to write a paper. This is an unusual situation because the project is nearly complete and I still have a year left in the program, so the main focus going forward will be to publish and then to learn about the next step of the process, the Lorenz-Mie theory of scattered light. That step will in many ways be more difficult than the current project because of the mathematics involved, but the knowledge I gain should help me understand how to cut down the time spent in this stage of the analysis as well. The sub-33 ms goal is within reach and will be nearly achieved using the Haar cascade method, and I am hopeful for my future working in the Grier lab.

Bibliography:
A. Yevick, M. Hannel and D. G. Grier. Machine-learning approach to holographic particle characterization. Optics Express 22, 26884-26890 (2014). (preprint)
B. J. Krishnatreya and D. G. Grier. Fast feature identification for holographic tracking: The orientation alignment transform. Optics Express 22, 12773-12778 (2014). (preprint)
C. Wang, X. Zhong, D. B. Ruffner, A. Stutt, L. A. Philips, M. D. Ward and D. G. Grier. Holographic characterization of protein aggregates. Journal of Pharmaceutical Sciences 105, 1074-1085 (2016). (preprint)
C. Wang, F. C. Cheong, D. B. Ruffner, X. Zhong, M. D. Ward and D. G. Grier. Holographic characterization of colloidal fractal aggregates. Soft Matter 12, 8774-8780 (2016). (preprint)
O. Ogunmolu, X. Gu, S. Jiang and N. Gans. Nonlinear systems identification using deep dynamic neural networks. arXiv:1610.01439 (2016). https://arxiv.org/abs/1610.01439
C. Szegedy, A. Toshev and D. Erhan. Deep neural networks for object detection. Advances in Neural Information Processing Systems 26 (2013). https://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection
