High-Speed Camera Technology
Bayer Filter
Nearly all color sensors follow the same principle, named after its inventor, Dr. Bryce E. Bayer.
The light-sensitive cells, or pixels, on the sensor can only distinguish different levels of brightness. For this reason tiny color filters (red, green and blue) are placed in front of the pixels as part of the production process.
In a subsequent image-processing step the filtered output values are recombined into a “color pixel”.
To come closer to the perception of the human eye (which is much more sensitive to green than to other colors), twice as many green filters are used.
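The 2:1:1 distribution of green, red and blue filters can be illustrated with a small sketch. The RGGB layout and the 4×4 sensor size below are illustrative assumptions; real sensors use the same 2×2 cell repeated over millions of pixels.

```python
import numpy as np

# A minimal sketch of an RGGB Bayer mosaic (hypothetical 4x4 sensor).
# Each 2x2 cell contains one red, two green and one blue filter,
# matching the eye's higher sensitivity to green.
def bayer_pattern(h, w):
    """Return an array of filter labels ('R', 'G', 'B') for an RGGB mosaic."""
    pattern = np.empty((h, w), dtype="<U1")
    pattern[0::2, 0::2] = "R"   # red on even rows/cols
    pattern[0::2, 1::2] = "G"   # green next to red
    pattern[1::2, 0::2] = "G"   # green below red
    pattern[1::2, 1::2] = "B"   # blue diagonal to red
    return pattern

p = bayer_pattern(4, 4)
counts = {c: int((p == c).sum()) for c in "RGB"}
print(counts)  # twice as many green filters as red or blue
```

Counting the labels confirms the 1:2:1 ratio of red, green and blue filters.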
Burst Trigger Mode
Generally a trigger event tells the camera when to start recording; after a predefined amount of time (or when the memory is full) the recording stops.
Depending on the application, yet another trigger event may tell the camera when to terminate the recording.
In Burst Trigger Mode, however, the camera records as long as, and as often as, the trigger is active (comparable to the trigger mechanism of a machine gun).
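The behaviour can be sketched in a few lines. The trigger signal and frame names below are made-up illustrations; the point is that frames are kept only while the trigger is active, however often it fires.

```python
# A minimal sketch of burst-trigger recording (hypothetical signal/frames):
# frames are captured only while the trigger signal is active.
def burst_record(trigger_signal, frames):
    """Keep only the frames captured while the trigger was active."""
    return [f for t, f in zip(trigger_signal, frames) if t]

trigger = [0, 1, 1, 0, 1, 0]          # trigger active during frames 1-2 and 4
frames  = ["f0", "f1", "f2", "f3", "f4", "f5"]
print(burst_record(trigger, frames))  # ['f1', 'f2', 'f4']
```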
CCD / CMOS comparison
Abbreviations for the two main sensor technologies, describing the inner structure of the chip:
“CMOS”: complementary metal-oxide semiconductor
“CCD”: charge-coupled device
A CCD sensor delivers a defined electrical charge per pixel, i.e. a certain number of electrons corresponding to the previous exposure.
These charges have to be captured pixel by pixel by downstream electronic circuitry, converted into a voltage and then recalculated into a binary value.
This operation is rather time-consuming. In addition, the whole frame has to be grabbed, which requires extensive post-processing.
CMOS sensors can be produced more cheaply and offer the possibility of onboard preprocessing; the information of every pixel can be provided in digitised form.
Thus the camera can be designed smaller, and random access to particular parts of the image (“ROI”, region of interest) is possible.
Needing fewer external circuits results in reduced power consumption of the camera, and the stored frames can be read out much faster.
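The ROI idea can be sketched with array slicing. The 8×8 "sensor" and the region coordinates are illustrative assumptions; the point is that only the requested window is read, not the full frame.

```python
import numpy as np

# A minimal sketch of CMOS-style random ROI access (hypothetical values):
# instead of reading the full frame, only a region of interest is read out.
full_frame = np.arange(64, dtype=np.uint8).reshape(8, 8)  # 8x8 "sensor"

def read_roi(frame, x, y, w, h):
    """Read only the w x h region starting at column x, row y."""
    return frame[y:y + h, x:x + w]

roi = read_roi(full_frame, x=2, y=1, w=3, h=2)
print(roi.shape)  # (2, 3): far fewer pixels than the full 8x8 frame
```

Reading fewer pixels per frame is also what lets CMOS high-speed cameras trade resolution for frame rate.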
Dynamic Range Adjustment
The human eye has a very wide dynamic range, i.e. it can evaluate very low lighting conditions (such as candle- or starlight) as well as extreme light impressions (sunlight reflected on a water surface).
This corresponds to a (logarithmic) dynamic range of 90 dB. That means two objects whose quantities of light differ by a factor of 1,000,000,000 can both be seen clearly.
Unlike this, a CMOS camera has a linear dynamic range of about 60 dB, which equals a ratio of 1:1000.
If, for instance, a recording setup requires identifying dim component labels next to large welding reflections, image details within the reflection area cannot be seen.
Cameras with Dynamic Range Adjustment enable the user to adjust the linear response in certain ranges: overexposed objects become darker without losing intensity on the dark ones.
Thus minimal variations of luminosity can be detected, even in areas of intense reflective light.
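A common way to realise such an adjustment is a piecewise-linear ("knee") response curve; the sketch below illustrates the idea. The knee point, slope and 8-bit output range are assumptions, not values from any particular camera.

```python
# A minimal sketch of a piecewise-linear ("knee") response curve
# (knee point, slope and 8-bit range are illustrative assumptions):
# below the knee the response stays linear; above it, bright values
# are compressed so highlights no longer saturate.
def knee_response(value, knee=200, slope=0.25, max_out=255):
    """Map a linear sensor value to a compressed output value."""
    if value <= knee:
        return value                      # dark region: full contrast kept
    out = knee + (value - knee) * slope   # bright region: compressed
    return min(int(out), max_out)

print(knee_response(100))   # 100: dark values are unchanged
print(knee_response(400))   # 250: bright values compressed, not clipped
```

Dark pixels keep their full contrast, while bright pixels are mapped into the remaining output range instead of clipping to white.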
Fixed Pattern Noise (FPN)
Every single pixel, or photodiode, in a CMOS camera has a construction-related tolerance.
Even without any exposure to light the diodes generate slightly varying output values.
To avoid a corruption of the image, a process similar to the white balance in digital photography is used: a reference picture is captured without exposure (a dark frame).
This frame contains only the detected pixel-to-pixel differences and is used to correct the subsequent images of the sensor.
Only after this kind of post-processing is, for example, a plain white area displayed homogeneously white.
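The correction itself amounts to a per-pixel subtraction, as the sketch below shows. All pixel values here are made-up illustrations.

```python
import numpy as np

# A minimal sketch of fixed-pattern-noise correction by dark-frame
# subtraction (all pixel values are made-up illustrations):
dark_frame = np.array([[2, 0], [1, 3]], dtype=np.int16)   # per-pixel offsets
raw_image  = np.array([[102, 100], [101, 103]], dtype=np.int16)

# Subtract each pixel's known offset, clamping to the valid 8-bit range.
corrected = np.clip(raw_image - dark_frame, 0, 255)
print(corrected)  # a uniform scene now reads uniformly: all 100
```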
Gigabit Ethernet (GigE)
This data transfer technology allows transmission among various devices (servers, printers, mass storage, cameras) within a network.
While standard Ethernet is too slow for the transfer of comprehensive image data, Gigabit Ethernet (GigE), with a maximum transfer rate of 1000 Mbit/s or 1 Gigabit per second, ensures a dependable image transfer in machine vision cameras.
GigE Vision is an industrial standard, developed by the AIA (Automated Imaging Association) for high-performance machine vision cameras and optimised for the transfer of large amounts of image data.
GigE Vision is based on the network structure of Gigabit Ethernet and includes a hardware interface standard (Gigabit Ethernet), communication protocols, and standardised communication and control modes for cameras.
The GigE Vision camera control is based on a command structure named GenICam.
This establishes a common camera interface that enables communication with third-party vision cameras without any customisation.
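A back-of-the-envelope calculation shows how the 1 Gbit/s link caps the achievable frame rate. The resolution and bit depth below are illustrative assumptions, and protocol overhead is ignored, so real throughput is somewhat lower.

```python
# A rough sketch of how GigE bandwidth limits frame rate (illustrative
# resolution and bit depth; protocol overhead is ignored).
LINK_BITS_PER_S = 1_000_000_000      # nominal 1 Gbit/s GigE rate

def max_frame_rate(width, height, bits_per_pixel=8):
    """Frames per second a 1 Gbit/s link can carry at the given format."""
    bits_per_frame = width * height * bits_per_pixel
    return LINK_BITS_PER_S / bits_per_frame

print(round(max_frame_rate(640, 480)))   # roughly 407 fps for 8-bit VGA
```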
Multi Sequence Mode
In this mode the available memory of the camera is divided into many individual sequences. Following each trigger event (e.g. a keystroke, or a light barrier being tripped), a predefined number of frames is saved.
With repeatedly occurring events, the different variations can be compared and provide a valuable basis for the analysis of malfunctions or technical processes.
Even a previously determined number of frames before and after the trigger event can be saved within every recorded sequence.
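Saving frames from before the trigger is typically done with a ring buffer that continuously holds the most recent frames. The sketch below illustrates the principle with a hypothetical frame list and pre/post counts.

```python
from collections import deque

# A minimal sketch of pre-/post-trigger recording (frame list and
# pre/post counts are illustrative assumptions): a ring buffer keeps
# the most recent frames so that history before the trigger survives.
def record_sequence(frames, trigger_index, pre=2, post=2):
    """Return `pre` frames before the trigger, the trigger frame, and `post` after."""
    ring = deque(maxlen=pre)        # continuously holds the last `pre` frames
    for i, frame in enumerate(frames):
        if i == trigger_index:
            return list(ring) + frames[i:i + post + 1]
        ring.append(frame)
    return []

frames = list(range(10))
print(record_sequence(frames, trigger_index=5))  # [3, 4, 5, 6, 7]
```

Each stored sequence then contains the moments leading up to the event as well as its aftermath.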
Sobel Filter
In several machine vision applications, such as motion analysis, positioning or pattern matching, it is essential to determine certain edges, outlines or coordinates.
The Sobel filter uses an edge-detection algorithm to detect just those edges and produces a chain of pixels (just on/off) that traces the edges.
This process allows the data stream to be cut down by more than 80% already in the FPGA chip of the camera. Less data has to be transferred and processed, so the effective transfer rate rises considerably.
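The filter convolves the image with two 3×3 gradient kernels and thresholds the gradient magnitude to a binary edge map. The test image and threshold below are assumptions for illustration; an FPGA would compute the same arithmetic in parallel rather than in Python loops.

```python
import numpy as np

# A minimal sketch of Sobel edge detection with binary (on/off) output
# (test image and threshold are illustrative assumptions).
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
KY = KX.T                                            # vertical gradient

def sobel_edges(img, threshold=100):
    """Return a boolean edge map: True where the gradient magnitude is large."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = (KX * patch).sum()
            gy = (KY * patch).sum()
            out[y, x] = np.hypot(gx, gy) > threshold
    return out

# A vertical step edge: only the columns around the transition light up.
img = np.zeros((5, 5), dtype=float)
img[:, 3:] = 255
print(sobel_edges(img).astype(int))
```

Transmitting one bit per pixel instead of eight or more is where the large reduction of the data stream comes from.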
Suspend to Memory Mode
In this mode the operation of the camera is reduced to the preservation of the recorded images.
Due to the resulting low power consumption, the charge of the storage battery lasts significantly longer.
This mode is activated either automatically after recording or manually by pressing a button.
Thus the recording memory can be preserved for 24 hours.
ImageBLITZ automatic trigger
To capture an unpredictable or unmeasurable event for “in-frame” triggering purposes, Mikrotron invented the ImageBLITZ operation mode.
In most cases no further equipment or elaborate trigger-sensing devices for camera control are needed; the picture itself is the trigger.
Within certain limits, ImageBLITZ can be adjusted to react only to the expected changes in a predefined area of the picture.
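The principle of such an in-frame trigger can be sketched as follows. The watched region, threshold and pixel values are illustrative assumptions, not details of the actual ImageBLITZ implementation.

```python
import numpy as np

# A minimal sketch of an in-frame trigger (ROI, threshold and pixel
# values are illustrative assumptions): recording starts when the
# brightness inside a watched region changes strongly enough.
def image_trigger(prev, curr, roi=(slice(2, 4), slice(2, 4)), threshold=50):
    """Trigger when the mean brightness change inside the ROI exceeds threshold."""
    diff = np.abs(curr[roi].astype(int) - prev[roi].astype(int))
    return diff.mean() > threshold

quiet = np.zeros((6, 6), dtype=np.uint8)
flash = quiet.copy()
flash[2:4, 2:4] = 200                # event appears inside the watched region
print(image_trigger(quiet, quiet))   # False: nothing changed
print(image_trigger(quiet, flash))   # True: ROI brightness jumped
```

Restricting the comparison to a predefined region is what lets the trigger ignore irrelevant motion elsewhere in the frame.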