Aydogan Ozcan’s UCLA lab is exploring ways that deep neural nets can be used not just to classify images, but to enhance or even reconstruct images
20 May 2019
By Gwen Weerts

The human brain is a very skilled classifier. It can quickly categorize Chicago as a musical, Amelie and Napoleon Dynamite as quirky indie films, and The Shining and Psycho as horror movies.

Netflix has capitalized on the patterns in viewers’ movie preferences to make automated suggestions about what you might like to watch next. So after adding both Mission: Impossible and Jack Reacher to your Netflix queue, you may see a recommendation for both James Bond: Skyfall and Rain Man, which is Netflix’s attempt to suss out whether you are more interested in action movies or Tom Cruise movies, and tailor its future recommendations accordingly. These suggestions are based on neural nets—trained by your behavior patterns—that mimic the human brain’s ability to recognize patterns and then use those patterns to classify and cluster information.

Perhaps more important than Netflix’s on-point “Top Picks for You” list is the work that imaging scientists have done to use machine learning to recognize and classify images. Apple famously introduced facial recognition into iPhoto back in 2009. Since then, the field has matured, and machine learning algorithms can now detect faces of criminals in crowd photos and classify tumors from mammograms.

However, SPIE Fellow Aydogan Ozcan, who leads the Bio- and Nano-Photonics Laboratory at the University of California, Los Angeles, thinks that image recognition or classification is just the tip of the iceberg for machine learning. His lab is exploring ways that deep neural nets can be used not just to classify images—that’s old news—but to enhance or even reconstruct images.

For example, holograms contain information about a three-dimensional scene. Because the laws of physics dictate the way that light and matter interact—such as scattering and diffraction—optical engineers and physicists have heretofore reconstructed the original image from that hologram using physics-driven methods. However, this process is computationally demanding, and deep neural nets can expedite it considerably: they can teach themselves, with some supervision, what is happening during the physical reconstruction process and reproduce the outcome.

According to Ozcan, “You take a neural net, you feed in holograms, and you also feed in the corresponding images of what the physical reconstruction should look like. The neural net doesn’t know anything about the wave equation, or how light diffraction occurs, but through these training image pairs, it understands what it should do to reconstruct a hologram.”
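The idea in that quote can be sketched as an ordinary supervised training loop. The toy network, image sizes, and random tensors below are placeholders rather than the lab's actual architecture or data; the point is only that the network is optimized to match pairs of raw holograms and their physics-based reconstructions, with no wave equation or diffraction model written anywhere in the code.

```python
# Minimal, illustrative sketch (not the Ozcan lab's code) of training on
# (hologram, physics-reconstructed image) pairs. Shapes and data are placeholders.
import torch
import torch.nn as nn

class HologramToImageNet(nn.Module):
    """A toy image-to-image network standing in for the real architecture."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.layers(x)

model = HologramToImageNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # pixel-wise difference between prediction and ground truth

# Placeholder batch: raw holograms as inputs, physics-based reconstructions as targets.
holograms = torch.rand(8, 1, 128, 128)
reconstructions = torch.rand(8, 1, 128, 128)

for step in range(100):
    predicted = model(holograms)                # network's guess at the reconstruction
    loss = loss_fn(predicted, reconstructions)  # compare against the physical ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The training pairs do all the work here: the physics enters only through the ground-truth reconstructions, which is what Ozcan means when he says the network never "knows" the wave equation.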

The neural net learns a computational shortcut that is both efficient and powerful. That efficiency means neural nets can produce cleaner, higher-resolution images than alternative reconstruction approaches. Most crucially, a neural network can transform an image from one modality to another—a magic trick that could be a game changer for pathology and for microscopy at large.

Virtually stained tissue samples save time, save money

Medical imaging for pathology has in its toolkit numerous modalities that serve specific diagnostic purposes. For example, magnetic resonance imaging, fluorescence microscopy, and optical coherence tomography all have a specific niche and can provide a pathologist or physician with important information about a patient. But sometimes the necessary type of imaging requires special stains or contrast agents, which add time and complexity to the imaging process.

Histopathology (for example, after a biopsy is taken from a patient) is most commonly performed by taking a thin slice of tissue, then staining it (the job of a specialized histotechnologist) to color subcellular features with different tones of a dye. The stained tissue sample is then imaged by a brightfield microscope, and the resulting image is sent to a pathologist for inspection. It’s a process that can take two to three hours depending on the type of tissue and stain used. And, because the technician’s time is expensive, the process is expensive. Furthermore, sometimes this tissue-staining process must be done during a surgery, in order to determine the margins of a tumor and guide the surgeon accordingly, which makes the entire process extremely time sensitive.

Ozcan’s group has a better way. They have figured out how to use neural nets to transform images taken with a fluorescence microscope into brightfield images, skipping over the actual staining process entirely.

When excited with a specific wavelength of light, such as near ultraviolet, human tissue fluoresces because of naturally embedded fluorophores, and that fluorescence can be imaged with a simple fluorescence microscope. Ozcan’s group imaged unstained tissue samples this way and fed the resulting autofluorescence images into the neural net as input. An expert technician then stained the same samples using different stains (H&E, Masson’s Trichrome, or Jones’ silver stain), and the samples were imaged again with a brightfield microscope. These brightfield images were fed into the neural net as the ground truth, training it to convert each input image into its stained counterpart. After repeating this process thousands of times, the neural net learned what it was supposed to do: transform autofluorescence images of label-free tissue sections into images that look just like a labeled tissue slice imaged with a brightfield microscope.
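In code, the key ingredient of this workflow is a dataset of accurately registered image pairs: a label-free autofluorescence image as input and a brightfield image of the same field after chemical staining as the target. The sketch below illustrates only that pairing; the directory layout, file format, and use of torchvision are assumptions made for the example, not details from the lab's pipeline.

```python
# Hedged sketch of the paired-data idea described above, not the lab's pipeline.
# Each example pairs a label-free autofluorescence image (input) with a registered
# brightfield image of the same field after chemical staining (ground truth).
from pathlib import Path
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_image

class VirtualStainingPairs(Dataset):
    """Serves (autofluorescence, stained-brightfield) image pairs for supervised training."""
    def __init__(self, root):
        root = Path(root)  # hypothetical folder layout for illustration
        self.inputs = sorted((root / "autofluorescence").glob("*.png"))
        self.targets = sorted((root / "stained_brightfield").glob("*.png"))
        assert len(self.inputs) == len(self.targets), "every input needs a registered target"

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, idx):
        x = read_image(str(self.inputs[idx])).float() / 255.0   # label-free input
        y = read_image(str(self.targets[idx])).float() / 255.0  # chemically stained ground truth
        return x, y

# A network like the one sketched earlier would be trained on thousands of such pairs,
# learning to map unstained inputs to images that look like the stained ground truth.
loader = DataLoader(VirtualStainingPairs("tissue_dataset"), batch_size=4, shuffle=True)
```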

In other words, Ozcan virtually stained the tissue samples in real time, without a technician and without a brightfield microscope.

“This kind of a transformation is nearly impossible to do as a physical process,” says Ozcan. “How to convert autofluorescence into a brightfield equivalent image after the actual chemical staining—it’s a very complicated process to model, let alone form a physical law. But deep neural nets are helping us establish these transformations through images, through very accurately registered image pairs of input output, input output.”

This process could create a paradigm shift in histopathology by shortening the window between taking a sample and making a diagnosis from hours to minutes. Since time is money, that shortcut also means lower costs, and it eliminates the need for a trained histotechnologist. While that may sound like a dystopian sci-fi scenario in which a computer takes over the job of a highly trained human, consider that a scarcity of these technicians can be a bottleneck in resource-strapped countries. Lack of access to a technician can mean no diagnosis for a patient, and thus no treatment.

Ozcan’s team has so far tested this cross-modality imaging on multiple tissue types, including breast, kidney, lung, liver, ovary, salivary gland, prostate, and thyroid, and successfully imitated the three common histological stains. In blind tests, board-certified pathologists couldn’t tell which images came out of the deep neural net transformation and which ones came from a brightfield microscope.

Virtually stained tissue

The top row shows the label-free images taken with a fluorescence microscope. The virtually stained tissue images (row 2) are indistinguishable from chemically stained images (row 3) to the trained eye. Credit: Aydogan Ozcan

A picture may speak a thousand words, but a pixel says the only word you need

Not content with just probing the science of neural nets, or using neural nets for post-processing images, Ozcan wants to explore the ways these image transformations can be incorporated into new scientific instrument designs. “The next generation of instruments, whether it’s a microscope or a sensor or any kind of analyzer, will need to be fundamentally different,” he says. “It needs to be fused with machine learning at the design phase, with how it captures data, how it synthesizes output. The holistic instrument design should be immersed with deep learning.”

Ozcan’s vision of such a holistic design can be best understood by imagining a new camera, designed to recognize one very specific thing, such as a skin lesion. In this scenario, the camera’s job is to help a doctor decide whether a skin lesion should be removed for biopsy. Today, that task would be done with a separately optimized imaging instrument that digitizes the tissue image, after which a machine learning algorithm is applied to the resulting data. These two steps are performed sequentially, and each is designed and optimized on its own.

But in this camera-of-the-future scenario, the image isn’t the important piece of information. Neural nets, incorporated into imaging tools and hardware, could allow the entire process to be significantly simplified, for the specific task in mind.

Instead of creating an image of the skin lesion, the camera would need to capture a higher dimension of information from the field of view—perhaps worth only a few hundred pixels—that can then be run through a neural net that has been tasked with specifically classifying those pixels (which do not necessarily need to form an image interpretable by the human eye).

Those final classifications could be represented with just three LEDs: Red (Take the sample; it should go to biopsy.) Yellow (I’m not sure; the doctor should use her expertise.) Green (Don’t touch it; it would be a waste of insurance money and resources.) The end user, in this case the dermatologist, doesn’t need a ten-megapixel image; three pixels will do. According to Ozcan, “The instrument will not create a lot of data, just the data you need.”
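A minimal sketch can make the three-pixel idea concrete. The classifier below is invented for illustration only: it maps a few hundred raw measurements from such a task-specific camera to three output classes, one per LED. The measurement count, network size, and class labels are assumptions, and a real device would be trained on clinical data rather than fed random numbers.

```python
# Illustrative sketch in the spirit of the "three LEDs" idea, not a real device.
# It maps a few hundred non-image measurements to three classes (biopsy / defer / leave).
import torch
import torch.nn as nn

N_MEASUREMENTS = 300   # "a few hundred pixels" captured by the task-specific camera
CLASSES = ["red: send to biopsy", "yellow: defer to the doctor", "green: leave it alone"]

classifier = nn.Sequential(
    nn.Linear(N_MEASUREMENTS, 64), nn.ReLU(),
    nn.Linear(64, 3),              # one logit per LED
)

measurements = torch.rand(1, N_MEASUREMENTS)          # stand-in for one capture
probabilities = classifier(measurements).softmax(dim=1)
decision = CLASSES[int(probabilities.argmax())]
print(decision, probabilities.tolist())
```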

It’s an interesting vision for the future of medical imaging—one that uses AI to eliminate bottlenecks, reduce costs, and save time, while still leaving final decision-making power in the hands of a trained physician. It’s another step towards a future of democratized medicine—an issue important to Ozcan, as evidenced by his earlier well-known work on mobile microscopy and sensing—where no sick person goes undiagnosed due to lack of access to a specialist or an advanced laboratory.

Medical imaging is just one of many possible paths to go down in the burgeoning science of machine learning. The capabilities of neural nets are only just beginning to be realized by research groups like Ozcan’s lab at UCLA. “I’m learning so much about optical diffractive neural nets. I’m trying to learn how they work, how to integrate them with electronic neural nets, how to make the most out of optics in general within the context of machine learning,” he says. “How much can we push optical machine learning techniques to create really useful things? For microscopy, for machine vision, for security, for defense, you name it. There’s a lot of stuff to think about and work through. That’s certainly keeping me awake at night.”

That’s a more valid reason for sleepless nights than binge watching James Bond on Netflix. It’s also a better use of AI.