In the past year, several technological advances have transformed expectations of optical microscopes.
Microscopy is growing at a rapid rate as the result of substantial investment in nanotechnology research. Advances in nanotechnology not only support advances in materials technology, they also support developments in the semiconductor and medical device industries. This multibillion-dollar investment drives support for advanced microscopy technologies, which are expected to become a $5 to $6 billion global market by 2018.
In terms of growth rate, optical microscopy has been overtaken by scanning probe, electron and atomic force microscopy technologies. Some of these non-optical segments are seeing compound annual growth rates (CAGR) of 18%, far beyond the 6% CAGR of the entire microscopy market, according to the most recent estimates.
This is due in part to demand for imaging solutions that can resolve nanoscale features. The microelectronics industry, in particular, requires this capability for the development of semiconductors and next-generation electronic devices that rely on more exotic materials, such as carbon nanotubes or graphene.
Optical microscopy’s growth has slowed, but it’s still the largest segment, exceeding $1.2 billion. Inverted benchtop microscopes alone command more than $400 million in market value, which highlights the strength of optical microscopy: versatility.
From cursory inspection of fabricated components to sensitive real-time confocal, hyperspectral imaging of live cell samples, the optical microscope remains one of the research laboratory’s most important instruments.
Preserving the cell
One of the most important capabilities for any biotechnology laboratory is the non-destructive imaging of live cell samples. Cryo-electron imaging is useful for determining nanoscale biological structures, but to truly understand intra-cellular activities as they happen, high-resolution live cell imaging will continue to be in high demand.
A variety of solutions have been developed to facilitate live cell imaging, and among the most prominent techniques in recent years is super-resolution. Accomplished through specialized optics and photon-counting algorithms, this technique is employed by a variety of microscopy companies, including Leica Microsystems, Carl Zeiss MicroImaging GmbH and Nikon Instruments Inc.
However, the small focal scale of super-resolution microscopes isn’t always suitable for imaging deep within transparent tissues or whole organisms. Here, a different type of fluorescence technique comes into play: light sheet fluorescence microscopy (LSFM). LSFM was pioneered by investigators who needed to produce well-registered serial-section images for 3-D reconstruction of tissue structures. These scientists needed a high acquisition rate and an illumination scheme that could handle large, highly scattering samples, like cell cultures.
LSFM first appeared in 1994 as orthogonal plane fluorescence optical sectioning microscopy and was later refined as selective/single plane illumination microscopy (SPIM), which focuses a circular laser beam in only one direction to form a thin sheet of light. Illuminating only the single plane being imaged reduces photodamage considerably, allowing long-term observations.
The first commercial LSFM instrument, the Lightsheet Z.1 from Carl Zeiss MicroImaging GmbH, Thornwood, N.Y., was introduced late in 2012 and won a 2013 R&D 100 Award. Zeiss’ contribution to the existing technology was integrating its know-how in sample handling and image analysis, as well as introducing a number of improvements to the technique. The Lightsheet Z.1 features a vertical sample presentation, which allows the user to rotate the sample and acquire an isotropic point-spread function, rather than the axially elongated point-spread function of normal fluorescence illumination. This not only improves resolution, but also delivers views of a spheroidal sample from four 3-D viewpoints set at 90-degree intervals. Zeiss ZEN software employs a landmark-oriented algorithm to register the views, then applies a mean-intensity algorithm to fuse them. Optimizations for large-area live cell imaging include CMOS detectors, rather than point detectors, and the use of a water-immersion objective lens. LSFM’s value is in observing samples like developing embryos, at subcellular resolution, over the course of hours or days. Because it’s essentially a variation of epi-fluorescence imaging, LSFM has been successfully combined with super-resolution techniques.
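Zeiss’ registration and fusion algorithms themselves are proprietary, but the final mean-intensity fusion step can be illustrated with a minimal sketch, assuming the rotated views have already been transformed onto a common voxel grid (all names here are illustrative, not ZEN API calls):

```python
import numpy as np

def fuse_views_mean(views):
    """Fuse pre-registered 3-D views by mean intensity.

    views: list of equally shaped arrays (z, y, x), one per rotation
    angle, already mapped into a common coordinate frame (the
    landmark-based registration step is assumed to be done).
    """
    stack = np.stack(views, axis=0)
    return stack.mean(axis=0)

# Four synthetic "views" of the same volume with independent noise:
rng = np.random.default_rng(0)
truth = np.zeros((8, 8, 8))
truth[3:5, 3:5, 3:5] = 1.0
views = [truth + 0.1 * rng.standard_normal(truth.shape) for _ in range(4)]
fused = fuse_views_mean(views)
# Averaging N registered views suppresses uncorrelated noise
# by roughly sqrt(N) while keeping the common signal.
```

In practice the real gain of multiview fusion is filling in the information each single view loses to scattering and the elongated axial response, not just noise averaging.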
The fusion of technologies
The optical microscope has long been the starting point for much more specialized instruments. Technologies such as confocal imaging, stimulated emission depletion (STED), hyperspectral optics and x-ray imaging all begin, fundamentally, with a standard upright or inverted optical lightpath.
As with many hybrids, however, the marriage of methods leaves room for improvements. Modularity, such as that seen in Olympus America Inc.’s (Center Valley, Pa.) IX series of inverted microscopes, has allowed for several subsystems to be integrated in the lightpath of the instrument, without interfering with research workflows or benchtop space. These subsystems can be exchanged at will, letting one instrument become as useful as several.
But other systems, such as dedicated Raman microscopes, can benefit from further, more permanent, integration. Traditionally, Raman instruments were built from an inverted microscope. The addition of an excitation laser, light detector, filters and a spectrometer or monochromator allows the acquisition of scattered-light spectra, which in turn supply important chemical information that eludes conventional microscopy.
Early Raman microscopes were bulky because their subsystems were separately packaged and linked with fiber-optic couplings. More recent Raman instruments, such as the IDRaman micro introduced in September 2013 by Ocean Optics, Dunedin, Fla., have adopted new focusing technology that gives them a compact 4-in by 14-in by 11-in footprint. The instrument’s OneFocus feature optimizes the instrument for Raman sampling by using the same focal plane for collecting images and Raman signals, an improvement over the sampling approach taken by systems based on traditional inverted microscopes and fiber-optic couplings. The ability to focus for optimal Raman sampling while viewing a quality image of the sample simplifies the often tedious and inexact process of acquiring data from a specific structure or location on a sample. OneFocus also enhances data collection for applications where only a single layer of material is applied to a surface, such as graphene or surface-enhanced Raman spectroscopy (SERS).
The IDRaman micro is available with either a 532-nm or a 785-nm excitation laser, and offers the option of a high-resolution detector with 4-wavenumber resolution acquiring data from 200 to 2,000 wavenumbers, or a wide-range system with 8-wavenumber resolution acquiring data from 200 to 3,200 wavenumbers. Its 3-MP imager uses epi-illumination and interchangeable objectives, allowing users to adjust spot size and optical magnification.
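Raman shifts are quoted in wavenumbers (cm⁻¹) relative to the excitation laser, and the detected wavelength follows directly from that definition. A quick sketch of the conversion for Stokes scattering (instrument-independent physics; the function name is illustrative, not vendor code):

```python
def raman_shift_to_wavelength(excitation_nm, shift_cm1):
    """Absolute wavelength (nm) of Stokes-shifted Raman scattering.

    excitation_nm: laser wavelength in nanometers
    shift_cm1: Raman shift in wavenumbers (cm^-1)
    """
    # Convert the excitation wavelength to wavenumbers (1 cm = 1e7 nm),
    # subtract the Stokes shift, then convert back to wavelength.
    excitation_cm1 = 1e7 / excitation_nm
    return 1e7 / (excitation_cm1 - shift_cm1)

# Detector range needed to cover 200-3,200 cm^-1 at each laser line:
for laser_nm in (532, 785):
    lo = raman_shift_to_wavelength(laser_nm, 200)
    hi = raman_shift_to_wavelength(laser_nm, 3200)
    print(f"{laser_nm} nm excitation: {lo:.0f}-{hi:.0f} nm")
```

The same shift range lands on very different wavelength bands for the two lasers, which is one reason excitation wavelength and detector are specified as paired options.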
Microscopy companies are making greater efforts at system integration on a variety of levels, and individual components reflect this change. Olympus, for example, began marketing a new type of image sensor a little more than a year ago that offers both color and monochrome imaging in a single unit. Designed for researchers who previously needed to align two cameras to a dual port, the system is essentially two complete, high-performance parfocal and parcentric cameras in one. Geared for the acquisition of both fluorescence and color brightfield, the DP80 dual CCD device helps increase functionality on a single microscope stand.
The basis for this innovation is pixel-shifting technology, which also allows Olympus’ DP73 3-CCD camera to capture three-color RGB information within a single pixel to improve resolution. In the case of the DP80 camera, the shifting pixels offer 12.5-MP color brightfield imaging and 14-bit monochrome imaging for fluorescence. cellSens software, which provides post-imaging optimization tools, automatically captures monochrome data and color information from the two camera sensors in rapid sequence.
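Olympus’ actual pixel-shifting pipeline isn’t public, but the basic idea behind the technique can be sketched generically: the sensor is displaced by sub-pixel steps between exposures, and the shifted captures are interleaved onto a finer grid (all names here are hypothetical illustration, not the DP80’s algorithm):

```python
import numpy as np

def pixelshift_combine(frames):
    """Interleave four half-pixel-shifted captures into a 2x grid.

    frames: dict mapping (dy, dx) in {0, 1} -- the half-pixel shift
    applied to the sensor for that exposure -- to a 2-D capture of
    shape (h, w). Returns an array of shape (2h, 2w).
    """
    h, w = frames[(0, 0)].shape
    out = np.empty((2 * h, 2 * w), dtype=frames[(0, 0)].dtype)
    for (dy, dx), frame in frames.items():
        # Each shifted capture fills every other sample of the
        # output grid, offset by its shift.
        out[dy::2, dx::2] = frame
    return out

# Tiny synthetic example: each "capture" is tagged by its shift.
frames = {(dy, dx): np.full((2, 2), 10 * dy + dx)
          for dy in (0, 1) for dx in (0, 1)}
hi_res = pixelshift_combine(frames)  # shape (4, 4)
```

The payoff is spatial (or, with color-filter shifts, chromatic) sampling finer than the physical pixel pitch, at the cost of multiple sequential exposures, which is why the approach suits static or slowly changing scenes.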
The future of optical
A greater understanding of the dynamics and quantum behavior of light has allowed the development of super-resolution technology. In addition, advances in mathematics and algorithms have greatly improved real-time and fluorescence imaging.
Researchers have been working in these areas for decades. But occasionally, a new technology appears that takes a unique approach to improving optical imaging. In 2013, TAG Optics Inc., Princeton, N.J., introduced the second version of its high-speed varifocal optical device, the TAGLens 2.0. The lens was hailed by R&D Magazine’s R&D 100 Awards judges and others for offering the fastest adaptive lens speeds on the market. What sets the TAGLens apart is how it achieves this: The device is designed and optimized to harness the small density changes that sound creates in a fluid. The lens uses these density variations to modulate its index of refraction, shaping light as it passes through the device.
According to TAG Optics’ CEO Christian Theriault, acoustical light-shaping technology permits scanning speeds 1,000 times faster than any other existing adaptive lens technology.
The underlying technology was invented at Princeton Univ. in 2007. A sinusoidal electronic input in the radio frequency (RF) range drives a piezoelectric element, whose vibrations set up a standing sound wave in the lens fluid. The TAGLens 2.0’s focal length and effective aperture are controlled electronically by adjusting the amplitude and frequency of the RF driving signal. Essentially, says Theriault, this allows the device to offer multiple effective apertures and to be operated at multiple frequencies.
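To first order, such a resonant lens behaves like a thin lens whose optical power oscillates sinusoidally at the acoustic drive frequency, so the full focal range is swept every half-cycle. A toy model under that assumption (the drive frequency and power values below are illustrative, not TAG Optics specifications):

```python
import math

def lens_power(t, drive_freq_hz, max_power_dpt):
    """Instantaneous optical power (diopters) of an idealized
    resonant varifocal lens at time t (seconds).

    The standing acoustic wave modulates the fluid's refractive
    index profile, so the power swings between +max and -max once
    per drive cycle -- at hundreds of kHz, orders of magnitude
    faster than mechanical focus drives.
    """
    return max_power_dpt * math.sin(2 * math.pi * drive_freq_hz * t)

# Example: at a hypothetical 140 kHz drive, one full focal sweep
# takes about 7 microseconds (half that for min-to-max).
period_s = 1 / 140e3
peak = lens_power(period_s / 4, 140e3, 5.0)  # power at quarter cycle
```

Under this model, "changing focus" means choosing *when* in the cycle to expose or pulse the illumination, rather than physically moving an element, which is where the claimed speed advantage comes from.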
These speed improvements translate into faster scanning routines for 3-D volumes and highly profiled surfaces. Such samples now test the limits of high-throughput optical scanning instruments, which often can’t perform real-time routines because of the time needed to change focal position or to control depth-of-field independently of magnification.