
Mathematics and Computer Science

1799

A new computer graphics tool for the efficient and robust deformation of 2D images was developed by the group of Prof. Lipman.

Space deformation is an important tool in graphics and image processing, with applications ranging from image warping and character animation, to non-rigid registration and shape analysis. Virtually all methods attempt to find maps that possess three key properties: smoothness, injectivity and shape preservation. Furthermore, for the purpose of warping and posing characters, the method should have interactive performance. However, there is no known method that possesses all of these properties.

Previous deformation models can be roughly divided into mesh-based and meshless models. Mesh-based maps are predominantly constructed using linear finite elements and are inherently not smooth, but can be made to look smooth by using highly dense elements. Although methods for creating maps with controlled distortion exist, they are time-consuming, and dense meshes prohibit their use in an interactive manner. On the other hand, meshless maps are usually defined using smooth bases and hence are smooth themselves. Yet we are unaware of any known technique that ensures their injectivity and/or bounds on their distortion.

The new method presented here bridges the gap between mesh and meshless methods, by providing a generic framework for making any smooth function basis suitable for deformation.

Applications


  • Computer graphics and animation
  • Image registration for medical imaging, satellite imaging and military applications

Advantages


  • Robust, fast, efficient and scalable

  • Generic, can be applied to various scenarios

  • Possesses smoothness, injectivity and shape preservation with interactive performance


Technology's Essence


Deformation of 2D images is accomplished by enabling direct control over the distortion of the Jacobian during optimization, including preservation of orientation (to avoid flips). The method generates maps by constraining the Jacobian on a dense set of "collocation" points, using an active-set approach. Only a sparse subset of the collocation points needs to be active at any given moment, resulting in fast performance while retaining the distortion and injectivity guarantees. Furthermore, a precise mathematical relationship is derived between the density of the collocation points, the maximal distortion achieved on them, and the maximal distortion achieved everywhere in the domain of interest.
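A minimal sketch, in Python, of the kind of per-point check such an active-set scheme relies on: at each collocation point the 2x2 Jacobian must preserve orientation (positive determinant) and keep its singular-value ratio below a distortion bound. Function names, the distortion measure and the active-set margin are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def jacobian_ok(J, max_distortion=2.0):
    """Check a 2x2 map Jacobian at one collocation point.

    Returns True if the local map preserves orientation (det(J) > 0,
    i.e. no flip) and its distortion (ratio of largest to smallest
    singular value) stays below `max_distortion`.
    """
    if np.linalg.det(J) <= 0.0:          # orientation flipped
        return False
    s = np.linalg.svd(J, compute_uv=False)
    return s[0] / s[-1] <= max_distortion

def active_collocation_points(jacobians, max_distortion=2.0, margin=0.9):
    """Toy active-set selection: only points whose distortion is close to
    (or beyond) the bound need to be kept as active constraints."""
    active = []
    for idx, J in enumerate(jacobians):
        s = np.linalg.svd(J, compute_uv=False)
        if np.linalg.det(J) <= 0.0 or s[0] / s[-1] >= margin * max_distortion:
            active.append(idx)
    return active

# Example: a slight shear passes, a reflection is rejected.
shear = np.array([[1.0, 0.2], [0.0, 1.0]])
flip  = np.array([[-1.0, 0.0], [0.0, 1.0]])
print(jacobian_ok(shear), jacobian_ok(flip))   # True False
```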

  • Prof. Ronen Ezra Basri
1801

A new image processing tool for transient detection was developed by the group of Prof. Gal-Yam, originally for time-domain observational astronomy.
Image sequences are used in various fields, including medical imaging and satellite/airborne imaging. The comparison between images taken at different conditions (e.g. equipment or configuration, angles, weather and wavelength) can be a highly non-trivial problem, as subtraction artifacts can outnumber real changes between images.
The existing remedies for this problem include highly complex solutions that use machine learning algorithms to narrow the sea of candidates. In some cases, human interpretation of images cannot be avoided, resulting in very long processing times.
The new method presented here provides a proven solution for the subtraction of images taken at varying conditions. The tool can be applied for any type of imaging, allowing fast processing and accurate results.

Applications


  • Satellite/airborne imaging

  • Medical imaging
  • Defect detection

Advantages


  • Fast and automatic

  • Generic, can be applied to various imaging scenarios
  • Easily implementable into existing systems

Technology's Essence


The new method is used for processing at least two N-dimensional data-measurements (DMs) of a physical property, for detecting one or more new objects and/or a transition of one or more known objects, in complex constant-background DMs. Generally, the method includes: (1) generating a filtered-new-DM by match-filtering the new DM with respect to the impulse response of a reference DM; (2) generating a filtered-reference-DM by match-filtering the reference DM with respect to the impulse response of the new DM; (3) generating an N-dimensional object-indicator (OI) by subtracting the filtered-reference-DM from the filtered-new-DM, or vice versa; and (4) generating N-dimensional data scores from the N-dimensional OI, where each score is a probe for the existence of an object at a specific N-dimensional location.
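A simplified Python sketch of the four steps above, using FFT-based match filtering. The normalization in step (4) is a placeholder; the published method also propagates the noise of both images, which is omitted here.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def match_filter(image, psf):
    """Cross-correlate an image with a PSF (match filtering) via FFT."""
    # Conjugation in Fourier space corresponds to correlation with the PSF.
    return np.real(ifft2(fft2(image) * np.conj(fft2(psf, s=image.shape))))

def subtraction_score(new_dm, ref_dm, psf_new, psf_ref):
    """Simplified sketch of the four steps described above:
    (1) filter the new DM with the reference PSF,
    (2) filter the reference DM with the new PSF,
    (3) subtract to get the object indicator (OI),
    (4) turn the OI into a per-pixel detection score.
    The real method propagates the noise of both images; here we simply
    divide by the empirical standard deviation as a placeholder."""
    filtered_new = match_filter(new_dm, psf_ref)   # step 1
    filtered_ref = match_filter(ref_dm, psf_new)   # step 2
    oi = filtered_new - filtered_ref               # step 3
    return oi / np.std(oi)                         # step 4 (placeholder)
```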
  • Prof. Avishay Gal-Yam
1765

A new image reconstruction tool based on non-iterative phase information retrieval from a single diffraction pattern was developed by the group of Prof. Oron. 
Lensless imaging techniques enable indirect high-resolution observation of objects by measuring the intensity of their diffraction patterns. These techniques utilize radiation in the X-ray regime to image non-periodic objects at sizes that prohibit the use of longer wavelengths. However, retrieving the phase information of the diffraction pattern is not a trivial task, as current methods involve a tradeoff between experimental complexity and computational reconstruction efficiency.
The method described here is suitable for use with existing lensless imaging techniques to provide direct, robust and efficient phase data while requiring reduced computational and experimental complexity. The method, demonstrated in a laboratory setup on 2D objects, is also applicable in 1D. It can be applied to various phase retrieval applications such as coherent diffractive imaging and ultrashort pulse reconstruction.

Applications


  • Phase microscopy
  • Signal processing
  • Holography
  • X-ray imaging

Advantages


  • A generic solution to the phase retrieval problem
  • Non-iterative approach
  • An efficient and noise robust tool

Technology's Essence


The method is based on the fact that the Fourier transform of the diffraction intensity measurement is the autocorrelation of the object. The autocorrelation and cross-correlations of two sufficiently separated objects are spatially distinct. Based on this, the method consists of three main steps: (a) the sum of the objects’ autocorrelations, as well as their cross-correlation, are reconstructed from the Fourier transform of the measured diffraction pattern; (b) the individual objects’ autocorrelations are reconstructed from their sum and the cross-correlation; (c) using the two intensities and the interference cross term, double-blind Fourier holography is applied to recover the phase by solving a set of linear equations.
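A small Python illustration of the fact underlying step (a): the inverse Fourier transform of the measured diffraction intensity equals the field's autocorrelation, and for two well-separated objects the cross-correlation lobes land away from the central autocorrelation lobe. The object sizes and positions are arbitrary toy values.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

# Build a toy field containing two well-separated objects.
field = np.zeros((256, 256), dtype=complex)
field[40:60, 40:60]   = np.random.rand(20, 20)   # object 1
field[40:60, 180:200] = np.random.rand(20, 20)   # object 2, far from object 1

# A lensless detector measures only the diffraction intensity |F(field)|^2.
intensity = np.abs(fft2(field)) ** 2

# Step (a): the inverse Fourier transform of the intensity is the field's
# autocorrelation.  Because the objects are well separated, the summed
# autocorrelations (centered at zero shift) and the cross-correlations
# (centered at +/- the separation vector) occupy spatially distinct
# regions and can be cropped out individually.
autocorr = fftshift(ifft2(intensity))
print(autocorr.shape)   # (256, 256): a central lobe plus displaced cross terms
```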

  • Prof. Dan Oron
1800

A new software tool used for the removal of artifacts from transcranial magnetic stimulation (TMS) triggered electroencephalography (EEG) was developed by the group of Prof. Moses.

The combined use of TMS with EEG allows for a unique measurement of the brain's global response to localized and abrupt stimulations. This may allow TMS-EEG to be used as a diagnostic tool for various neurologic and psychiatric conditions.

However, large electric artifacts, which are unrelated to brain activity and obscure crucial stages of the brain's response, are induced in the EEG by the TMS. These artifacts are orders of magnitude larger than the physiological brain activity, and persist from a few to hundreds of milliseconds. No generally accepted algorithm is available that can remove the artifacts without unintentionally and significantly altering physiological information.

The software designed according to the model, along with a friendly GUI, is a powerful tool for the TMS-EEG field. The software has been tested and proven effective on real datasets measured on psychiatric patients.

Applications


  • TMS triggered EEG diagnostics

Advantages


  • Easy to use software with a GUI
  • Exposes the full EEG from the brain

Technology's Essence


The new software tool is based on the observation that, contrary to expectation, the decay of the electrode voltage after the TMS pulse follows a power law in time rather than an exponential. A model based on two-dimensional diffusion of the charge accumulated in the skin by the high electric fields of the TMS was built. This model reproduces the artifact precisely, including the many perplexing artifact shapes seen on the different electrodes. Artifact removal software based on this model exposes the full EEG from the brain, as validated by continuously reconstructing 50 Hz signals that are the same magnitude as the brain signals.
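A minimal Python sketch of the underlying idea, assuming the artifact dominates the trace and keeps one sign over the fitted window: fit A·(t − t0)^(−α) in log-log space and subtract it. This is only an illustration of the power-law model, not the released software.

```python
import numpy as np

def fit_power_law(t, v, t0=0.0):
    """Fit the post-pulse electrode voltage to A * (t - t0)^(-alpha) by
    linear regression in log-log space.  Valid while the artifact
    dominates and keeps one sign; t in ms, with t > t0."""
    sign = np.sign(np.median(v))
    slope, intercept = np.polyfit(np.log(t - t0), np.log(sign * v), 1)
    return sign * np.exp(intercept), -slope        # signed A, alpha

def subtract_artifact(t, v, t0=0.0):
    """Remove the fitted power-law artifact, exposing the underlying EEG."""
    a, alpha = fit_power_law(t, v, t0)
    return v - a * (t - t0) ** (-alpha)
```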

  • Prof. Elisha Moses
1802

A new signal processing tool for the detection of pulses travelling through media with complex or unknown dispersion properties was developed by the group of Prof. Gal-Yam, originally for detecting radio bursts in astronomical observations.
Pulses are applied in various fields such as oil & gas exploration, detection (e.g. sonar, lidar and radar) and communication. When pulses pass through dispersive media, the arrival times at the detector of different frequency components may differ, and as a result the pulse may become degraded (e.g. transformed into a longer pulse with reduced intensity), even to the point of becoming indistinguishable from the noise. This problem becomes even more challenging when detecting short pulses that travel through complex or unknown media.
The new method presented here provides a proven and efficient solution that can be applied for different scenarios where short pulses dispersed by complex media are used. 

Applications


  • Detection and surveying technologies: sonar, lidar, radar, etc.

Advantages


  • Efficient, requires limited computational resources
  • Generic, can be applied to various setups
  • Easily implementable into existing systems

Technology's Essence


The method includes obtaining an input array of cells, each indicating the intensity of a frequency component of the signal at a representative time. A fast dispersion measure transform (FDMT) is applied to concurrently sum the cells of the input array that lie along different dispersion curves, each curve defined by a known non-linear functional form and uniquely characterized by a time coordinate and by a value of the dispersion measure. Application of the FDMT includes initially generating a plurality of sub-arrays, each representing a frequency sub-band, and iteratively combining pairs of adjacent sub-arrays in accordance with an addition rule until all of the initially generated sub-arrays are combined into an output array of the sums; a cell of the output array that is indicative of a transmitted pulse is then identified.
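For illustration, a brute-force Python version of what the transform computes: summing the spectrogram along cold-plasma dispersion curves for a set of trial dispersion measures. The FDMT itself obtains the same sums far more cheaply by recursively combining frequency sub-bands, which is not reproduced here; the dispersion constant and array layout are standard assumptions.

```python
import numpy as np

def brute_force_dedispersion(spectrogram, freqs, dt, dm_trials, k_dm=4.148808e3):
    """Reference (non-FDMT) dedispersion: for each trial dispersion measure,
    shift every frequency channel by the cold-plasma delay
        delay(f) = k_dm * DM * (f**-2 - f_max**-2)   [seconds, f in MHz]
    and sum the channels.  The FDMT computes the same sums recursively,
    reusing partial sums over frequency sub-bands.

    spectrogram : array (n_freq, n_time), intensity per channel and time bin
    freqs       : channel centre frequencies in MHz
    dt          : time bin width in seconds
    dm_trials   : iterable of dispersion measures to test (pc cm^-3)
    """
    freqs = np.asarray(freqs, dtype=float)
    n_freq, n_time = spectrogram.shape
    f_max = freqs.max()
    out = np.zeros((len(dm_trials), n_time))
    for i, dm in enumerate(dm_trials):
        delays = k_dm * dm * (freqs ** -2 - f_max ** -2)   # seconds
        shifts = np.round(delays / dt).astype(int)         # in time bins
        for ch in range(n_freq):
            out[i] += np.roll(spectrogram[ch], -shifts[ch])
        # A dispersed pulse lines up at the correct DM and peaks in out[i].
    return out
```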

  • Prof. Avishay Gal-Yam
1696

A new method for observing large areas with physically small detectors, which are unable to cover the whole area simultaneously, based on multiplexing several scanned areas onto a single detector unit followed by algorithmic reconstruction of the true field of view.
Astronomical observations require the ability to detect very weak signals at high spatial resolution. This dictates the special characteristics of the observation systems: they need a large aperture, high-resolution detectors and very low system noise. These demands result in high cost and complexity.
Our multiplexing and reconstructing method was developed based on the sparse nature of astronomical observations, and it could be implemented in any application in which sporadic data points are to be found against a fixed (whether detailed or blank) background.

Applications


  • Highly efficient telescopes
  • Quick quality assurance systems – fault metrology
  • Implementation in microscopy

Advantages


  • Use of small size detectors
  • Ability to scan large fields (compared to detector size)
  • Maintaining high resolution
  • Significant shortening of scan time
  • Easily applicable to existing systems

Technology's Essence


The method was developed for astronomical observations in which the studied field is immense and the detector size is relatively small and limited. The invention consists of an optical system that directs light (IR, Vis, UV or other) from different locations in the sky to the focal plane of a telescope onto a specific single detector area, creating a multiplexed image in which several portions of the sky are presented collectively.
Such multiplexing is done on each detector unit area with a different set of sky loci.
A reconstruction algorithm was developed to recover the sub-observation sets in a way that guarantees unique recovery of the original wide-field image even when objects overlap.
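A toy Python sketch (with made-up patch sets) of why multiplexing sparse scenes is invertible: because each detector unit sums a different set of sky loci, the pattern of units that do or do not see a new source singles out the patch it came from. The actual reconstruction algorithm is more general and handles overlapping objects.

```python
# Toy sketch (illustrative only): 6 sky patches are multiplexed onto 3
# detector units.  Each unit sums a *different* subset of patches, so a
# sparse transient can be localized by intersecting the units that see it.
patch_sets = {               # which sky patches feed each detector unit
    "unit_A": {0, 1, 2},
    "unit_B": {0, 3, 4},
    "unit_C": {1, 3, 5},
}

def localize_transient(triggered_units):
    """Return the sky patches consistent with the set of units that detected
    a new source (assumes a single sparse transient)."""
    candidates = set(range(6))
    for unit, patches in patch_sets.items():
        if unit in triggered_units:
            candidates &= patches     # the source must lie in this unit's set
        else:
            candidates -= patches     # ...and outside the sets of quiet units
    return candidates

# A transient in patch 3 lights up units B and C but not A:
print(localize_transient({"unit_B", "unit_C"}))   # {3}
```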

  • Prof. Avishay Gal-Yam
1571

A novel social behavior monitoring system automatically tracks the precise location of each animal at excellent temporal resolution. This innovative technology provides simultaneous identification of complex social and individual behaviors via an integration of RFID and video surveillance.

There is a rapidly growing interest in detecting the molecular substrates of social behavior. This interest is driven by the vast implications of such understanding in both research and the pharmaceutical industry, since some prevalent pathological conditions are mainly characterized by a behavioral deficit or abnormality.

It is extremely challenging to quantify social behavior in a reliable manner. Existing methods struggle to balance objectively quantifying behavior on one hand with enabling natural, stress-free behavioral estimation on the other. Currently, researchers work in a strictly controlled and constrained environment that is estranged and stressful to the animals. The outcome is a highly contaminated measurement of natural behavior. This difficulty becomes increasingly complex when more than one animal is involved, as is often the case in social behavioral studies.

Applications


  • Rigorous characterization of social organizational deficiencies and evaluation of their severity in animal and human models (for example in autism).
  • An optimized system for estimating the efficacy of clinical treatments.

Advantages


  • Long-term tracking of an unlimited number of simultaneously studied animals.
  • Machine-based, hence objective and automated, quantification of behavior.
  • Excellent spatiotemporal resolution in a semi-natural environment.
  • Flexible: the number, size and distribution of the RFID antennas can be adjusted to different enclosure dimensions.
  • Can be applied from individual behavioral profiles and pair interactions up to the collective social organization of groups.
  • Systematic analysis and classification of behaviors, from basic locomotion up to more complex social behaviors.

Technology's Essence


Researchers at the Weizmann Institute developed a method for tightly controlled monitoring of social behavior in a semi-natural environment. They used integrated and synchronized chip reporting and continuous video footage to precisely locate each individual animal. Using this automated monitoring, which provides exceptional temporal resolution, they achieved correct identification of numerous basic individual behaviors as well as complex social behaviors. Such complex behavioral profiles set the basis for subsequent analysis that reveals the formation of a social hierarchy.

  • Dr. Tali Kimchi
1585

Our scientific team has discovered a method to apply the Gabor Transform to signal processing and data compression.

Compared to existing methods that are based on Fourier transform, the new method provides for up to 25% savings in content size for video, audio and images, without any loss in quality.

By embracing our method, content providers, ISPs and mobile carriers can achieve major savings in data storage and data transfer costs.

Applications


The method can be used in virtually all applications involving data storage, communication and signal processing. One of the main commercial applications is lossy data compression for video, audio and images. These types of content constitute the bulk of today’s Internet traffic, and improved compression will generate substantial savings in storage and data transfer costs.

The method also applies to the storage, communication and processing of quantum information and may therefore be expected to have applications in quantum calculations, quantum communication and quantum information processing.


Advantages


Existing data compression methods are based on numerical implementations of the Fourier transform, known as FFT, DCT and similar.

Compared to these methods, the Gabor transform method demonstrates a very significant advantage in terms of the size of the compressed material.

The method provides for up to 25% savings in data size, while keeping the same perceived quality of the content.


Technology's Essence


We have discovered the definitive solution to the problem of obtaining accuracy and stability in the Gabor transform. We realized that there must be an exact informational equivalence between the Gabor transform and the discrete Fourier transform (DFT). The latter is known to provide an exact representation of functions that are band-limited with finite support. Since the DFT implicitly assumes periodic boundary conditions, to obtain this exact equivalence one needs to modify the Gaussians in the Gabor transform to obey periodic boundary conditions. This leads to Gaussian flexibility with Fourier accuracy: precisely what has been sought since 1946.
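A short Python sketch, under simplifying assumptions, of the central ingredient: a Gaussian window wrapped to obey periodic boundary conditions, used as the analysis window of a discrete Gabor transform. Grid sizes and the analysis-only formulation are illustrative; synthesis requires the dual window and is not shown.

```python
import numpy as np

def periodized_gaussian(n, center, sigma, n_wraps=5):
    """Gaussian window on n samples, wrapped (periodized) so that it obeys
    periodic boundary conditions, matching the DFT's implicit periodicity."""
    x = np.arange(n)
    g = np.zeros(n)
    for k in range(-n_wraps, n_wraps + 1):
        g += np.exp(-0.5 * ((x - center + k * n) / sigma) ** 2)
    return g

def gabor_coefficients(signal, n_shifts, n_freqs, sigma):
    """Inner products of the signal with time-shifted, frequency-modulated
    periodized Gaussians: a discrete Gabor analysis on an n_shifts x n_freqs
    grid (analysis only; reconstruction needs the dual window)."""
    n = len(signal)
    t = np.arange(n)
    coeffs = np.zeros((n_shifts, n_freqs), dtype=complex)
    for i in range(n_shifts):
        window = periodized_gaussian(n, i * n / n_shifts, sigma)
        for j in range(n_freqs):
            atom = window * np.exp(2j * np.pi * j * t / n_freqs)
            coeffs[i, j] = np.vdot(atom, signal)   # <atom, signal>
    return coeffs
```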

  • Prof. David J. Tannor
1647

Novel algorithms developed at the Weizmann Institute of Science for Content-Based Image Retrieval (CBIR) can enhance search engines by crowd-sourcing and improved clustering.
Discovering visual categories among collections of images is a long-standing challenge in computer vision, which limits image-based search engines. Existing approaches search for a common cluster model, focusing on identifying shared visual properties (such as a shared object) and subsequently grouping the images into meaningful clusters based upon these shared properties. Such methods are likely to fail when encountering a highly variable set of images or a fairly limited number of images per category.
Researchers from Prof. Michal Irani's lab suggest a novel approach based on ‘similarity by composition’. This technology detects statistically significant regions that co-occur across images, revealing strong and meaningful affinities even if they appear in only a few images. The outcome is a reliable cluster in which each image has high affinity to many images in the cluster, and weak affinity to images outside the cluster.

Applications


  • Image search engines: can be applied for collaborative search between users.
  • Detecting abnormalities in medical imaging.
  • Quality assurance in the fields of agriculture, food, the pharmaceutical industry, etc.
  • Security industry: from counting people to identifying suspicious acts.
  • Computer games and brain machine interface.

Advantages


  • Can be applied to very few images, as well as benchmark datasets, and yields state-of-the-art results.
  • Handles large diversity in appearance.
  • The search is not a global search; it requires no semantic query, tagging or pre-existing knowledge.
  • The multi-image collaboration significantly speeds up the process, reducing the number of random samples and iterations.
  • Sets of images are obtained in time that is nearly linear in the size of the image collection.


Technology's Essence


In “clustering by composition”, a good cluster is defined as one in which each image can be easily composed using statistically significant pieces from other images in the cluster, while being difficult to compose from images outside the cluster. Multiple images exploit their ‘wisdom of crowds’ to further improve the process. Using a collaborative randomized search algorithm, images can be composed from each other simultaneously and efficiently. This enables each image to direct the other images where to search for similar regions within the image collection. The resulting sets of image affinities are sparse yet meaningful and reliable.

  • Prof. Michal Irani
1574

Spinal cord injury (SCI) patients cannot use their abdominal muscles to produce an efficient cough and clear their airways. Functional Electric Stimulation (FES) may provide the abdominal contraction that is required; however, for such a device to fully substitute for the help of a caregiver, it must be easily activated and precisely synchronized with the patient's intent to cough, in order to replace the voluntary cough.
The inventors present a device which integrates nasal air signals, in the form of an active sniff, with triggering of FES at a precisely timed onset following glottis closure. Tetraplegic patients who used this system produced a cough comparable to a physiotherapist-assisted cough and reported a major improvement in quality of life.
This device offers a fresh approach to cough assistance that combines superior comfort and efficiency, well adjusted to the needs of spinal cord trauma patients.

Applications


  • Self controlled – enables quality of life, independence, intimacy.
  • Simple, compact and portable.
  • Enables "smart coughing" – a patient's needs or commands are used to modify parameters synchronizing the cough.
  • Nasal air sensors are considered less intrusive and more reliable than currently used mouth air sensors.
  • Potentially low cost system.

Advantages


  • Intuitive and easy to learn and control for any computer user. 
  • Simultaneous use of different controllers to improve and diversify gaming applications. 
  • Non-invasive and safe device

Technology's Essence


Nasal air controller technology and FES are integrated using an Arduino microcontroller device. This is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software.
The microcontroller receives analog inputs from pressure sensors and is programmed to trigger the FES. A command to cough from the patient may be two consecutive sniffs (nasal air signals). In addition, the system can potentially identify intent to cough using nasal air signals, without the need for a direct command.
One of the most important parameters of the invention is that the FES will be given during glottis closure. The system continuously samples the nasal air signal and defines glottis closure as a plateau in the signal. A machine learning element determines a typical glottis closure duration for each patient, providing the FES within this frame.
The FES is then given to the abdomen in order to facilitate coughing. The duration of the FES may be determined by feedback such as the level of emitted CO2, EMG values, sound volume, etc.
Finally, the device may be further down-sized to enable mobility and suit outdoor use.
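A purely illustrative Python sketch of the control logic described above (thresholds, durations and the plateau criterion are invented placeholders, and the real system runs on an Arduino with per-patient learned parameters): two consecutive sniffs arm the trigger, a plateau in the nasal-pressure signal is taken as glottis closure, and FES is scheduled for a fixed duration from that onset.

```python
import numpy as np

def detect_glottis_closure(pressure, fs, plateau_s=0.05, tol=0.02):
    """Return the first sample index at which the nasal-pressure signal has
    been flat (a plateau, interpreted as glottis closure) for plateau_s
    seconds; tol is the allowed peak-to-peak variation on the plateau."""
    win = int(plateau_s * fs)
    for i in range(win, len(pressure)):
        segment = pressure[i - win:i]
        if segment.max() - segment.min() < tol:
            return i
    return None

def cough_controller(pressure, fs, sniff_threshold=1.0, fes_duration_s=0.3):
    """Toy control loop: two consecutive sniffs (nasal-pressure peaks) arm
    the system; FES is then triggered at the onset of the next detected
    glottis-closure plateau and held for a learned/preset duration."""
    peaks = np.where(pressure > sniff_threshold)[0]
    # Count separate sniff events (gaps of > 0.2 s between threshold crossings).
    sniffs = 1 + int(np.sum(np.diff(peaks) > 0.2 * fs)) if len(peaks) else 0
    if sniffs < 2:
        return None                                   # not armed
    onset = detect_glottis_closure(pressure[peaks[-1]:], fs)
    if onset is None:
        return None
    start = (peaks[-1] + onset) / fs
    return (start, start + fes_duration_s)            # FES on/off times (s)
```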

  • Prof. Noam Sobel
1629

A new unsupervised learning tool for analyzing large datasets using very limited known data via clustering was developed by the group of Prof. Domany. This solution was originally demonstrated for inferring pathway deregulation scores for specific tumor samples on the basis of expression data.
Nearly all methods analyze pathway activity in a global “atomistic” manner, based on an entire sample set, not attempting to characterize individual tumors. Other methods use detailed pathway activity mechanism information and other data that is unavailable in a vast majority of cancer datasets.
The new algorithm described here transforms gene-level information into pathway-level information, generating a compact and biologically relevant representation of each sample. This can be used as an effective prognostic and predictive tool that helps healthcare providers find optimal treatment strategies for cancer patients. Furthermore, this method can be used generically for reducing the degrees of freedom in order to derive meaningful output from multi-dimensional data using limited knowns.

Applications


  • Personalized cancer treatment.
  • A tool for mining insight from large datasets with limited knowns.

Advantages


  • Provides personalized solutions.
  • Can be utilized for rare conditions with very limited known information.
  • Proven on real oncologic datasets.
  • A generic unsupervised learning tool.

Technology's Essence


The algorithm analyzes NP pathways, one at a time, assigning a score DP(i) to each sample i and pathway P, which estimates the extent to which the behavior of pathway P deviates from normal in sample i. To determine this pathway deregulation score, the expression levels of the dP genes that belong to P (identified using available databases) are used. Each sample i is a point in this dP-dimensional space; the entire set of samples forms a cloud of points, and the “principal curve” that captures the variation of this cloud is calculated. Then each sample is projected onto this curve. The pathway deregulation score is defined as the distance DP(i), measured along the curve, between the projection of sample i and the projection of the normal samples.
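A simplified Python sketch of the per-pathway score. For brevity the principal curve is approximated here by the first principal component; the actual method fits a genuine (curved) principal curve and measures distances along it. Variable names are illustrative.

```python
import numpy as np

def pathway_deregulation_scores(expr, normal_idx):
    """Simplified sketch of D_P(i): each sample is a point in the
    d_P-dimensional space of the pathway's genes; the "principal curve" is
    approximated by the first principal component, and D_P(i) is the
    distance of sample i's projection along it from the projection of the
    normal samples.

    expr       : array (n_samples, d_P), expression of the pathway's genes
    normal_idx : indices of the normal (non-tumor) samples
    """
    centered = expr - expr.mean(axis=0)
    # First principal component via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pc1 = vt[0]
    proj = centered @ pc1                     # coordinate of each sample along the curve
    normal_ref = np.median(proj[normal_idx])  # where the normal samples project
    return np.abs(proj - normal_ref)          # D_P(i) for every sample
```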

 

  • Prof. Eytan Domany
1461

Bidirectional Similarity offers a new approach to summarization of visual data (images and video) based on optimization of well defined similarity measure.

Common visual summarization methods (mainly scaling and cropping) suffer from significant deficiencies related to image quality and loss of important data. Many attempts have been made to overcome these problems; however, success was very limited and none has become commercially applicable.

Using an optimization-problem approach and state-of-the-art algorithms, our method provides superior summarization of visual data as well as a measure to determine similarity, which together provide a basis for a wide range of applications in image and video processing.

Applications


The technology can be utilized in any application where an image size is changed or where similarity of images is important. Sample applications include:

  • Image processing software (as an added-on feature)

  • Resizing software

  • Creation of Thumbnails

  • Adjustment of images to different screen sizes (TV-cellular etc.)

  • Optimization of space-time patches in video processing

  • Image montages

  • Automatic image & video cropping

  • Images synthesis, photo reshuffling and many more


Advantages


While Bidirectional Similarity summarization will not replace existing technologies in all applications, it enjoys significant advantages that will offer better results in many of them. Among its advantages, the Bidirectional Similarity summarization:

  • Provides better resolution and in many cases reduces distortion compared to scaling
  • Reduces (or avoids) loss of important data compared to cropping
  • Allows importance-based summarization even when important information is widespread and hard to define
  • Uses quantitative objective similarity measure
  • Offers a generic tool for different image processing applications (synthesis, montage, reshuffling etc.)

Technology's Essence


Bidirectional Similarity Summarization is a patent-pending image and video processing method, which maximizes “completeness” and “coherence” between images and videos, using a measure for quantifying how “good” a visual summary is.

The algorithm uses an iterative process, gradually reducing the image size while keeping all source patches in the target image, without introducing visual artifacts that are not in the input data. Using a Similarity Index, the algorithm identifies redundant information and compromises the “less important” data while generating the required target image or video.

The Similarity Index, which stands at the heart of the Bidirectional Similarity summarization algorithm, can be utilized on its own as an objective function within other optimization processes, as well as for comparing the quality of visual summaries generated by different methods.
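A brute-force Python sketch of the bidirectional measure itself (completeness plus coherence over all patch pairs), intended only to make the definition concrete; the actual algorithm evaluates and optimizes it far more efficiently, and the patch size and distance are illustrative choices.

```python
import numpy as np

def _patches(img, p):
    """All p x p patches of a 2D image, flattened to rows."""
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(h - p + 1) for j in range(w - p + 1)])

def bidirectional_dissimilarity(source, target, p=5):
    """Brute-force bidirectional measure: the 'completeness' term is the mean
    distance from each source patch to its nearest target patch (everything
    in the source should appear in the summary), and the 'coherence' term is
    the mean distance from each target patch to its nearest source patch
    (the summary should contain no artifacts absent from the source).
    Lower is better; intended for small images only."""
    ps, pt = _patches(source, p), _patches(target, p)
    d = np.linalg.norm(ps[:, None, :] - pt[None, :, :], axis=2)  # all pairs
    completeness = d.min(axis=1).mean()   # source -> target
    coherence    = d.min(axis=0).mean()   # target -> source
    return completeness + coherence
```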

  • Prof. Michal Irani
1021

A method for mapping and correcting optical distortion conferred by live cell specimens in microscopy, which cannot be overcome using optical techniques alone, can be used both for light microscopy and confocal microscopy. The system determines the 3D refractive index of the samples, and provides a method for ray tracing, calculation of the 3D space-variant point spread function, and generalized deconvolution.

Applications


Microscopy: The method was developed and applied for light microscopy, and is of critical importance for the detection of weak fluorescently labeled molecules (such as GFP fusion proteins) in live cells. It may also be applicable to confocal microscopy and other imaging methods such as ultrasound, deep-ocean sonar imaging, radioactive imaging, non-invasive deep-tissue optical probing and photodynamic therapy.

Gradient glasses: The determination of the three-dimensional refractive index of samples allows testing and optimization of techniques for the production of gradient glasses. Recently, continuous refractive-index gradient glasses (GRIN, GRADIUM) were introduced, with applications in high-quality optics, microlenses, aspherical lenses, plastic molded optics, etc. Lenses built from such glasses can be aberration-corrected at a level that previously required doublets and triplets of conventional glasses. Optimized performance of such optics requires ray tracing along curved paths, as opposed to straight segments between the surface borders of homogeneous glass lenses. Curved ray tracing is computation-intensive and dramatically slows down the optimization of optical properties. Our algorithm for ray tracing in a gradient refractive index eliminates this computational burden.

Technology's Essence


A computerized package to process three-dimensional images from live biological cells and tissues was developed in order to computationally correct specimen-induced distortions that cannot be corrected by optical techniques alone. The package includes: 1. Three-dimensional (3D) mapping of the refractive index of the specimen. 2. A fast method for ray tracing through a gradient refractive index medium. 3. Three-dimensional space-variant point spread function calculation. 4. Generalized three-dimensional deconvolution.
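A minimal Python sketch of item 2, ray tracing through a gradient refractive index, using simple Euler steps of the ray equation d/ds(n·dr/ds) = ∇n. Step size, step count and the index/gradient callables are illustrative assumptions, not the package's optimized implementation.

```python
import numpy as np

def trace_ray(pos, direction, n_func, grad_n_func, ds=1e-3, n_steps=1000):
    """Step a ray through a medium with a 3D refractive-index map by crude
    Euler integration of the ray equation d/ds (n * dr/ds) = grad(n).

    pos, direction : 3-vectors (direction need not be normalized)
    n_func(r)      : refractive index at position r
    grad_n_func(r) : gradient of the refractive index at r
    """
    r = np.asarray(pos, dtype=float)
    t = np.asarray(direction, dtype=float)
    t = t / np.linalg.norm(t)
    path = [r.copy()]
    for _ in range(n_steps):
        n = n_func(r)
        # Advance the position along the current direction, then bend the
        # direction toward increasing refractive index and renormalize.
        r = r + ds * t
        t = t + ds * grad_n_func(r) / n
        t = t / np.linalg.norm(t)
        path.append(r.copy())
    return np.array(path)
```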

  • Prof. Zvi Kam
1250

A robust method of identifying moving or changing objects in a video sequence groups each pixel with other adjacent pixels according to either motion or intensity values. Pixels are then repeatedly regrouped into clusters in a hierarchical manner. As these clusters are regrouped, the motion pattern is refined, until the full pattern is reached.

Applications


These methods for motion-based segmentation may be used in a multitude of applications that need to correctly identify meaningful regions in image sequences and compute their motion. Such applications include:

  1. Surveillance and homeland security - detecting changes, activities, objects.
  2. Medical Imaging - imaging of dynamic tissues.
  3. Quality control in manufacturing, and more.

Technology's Essence


Researchers at the Weizmann Institute of Science have developed a multiscale, motion-based segmentation method which, unlike previous methods, uses the inherent multiple scales of information in images. The method begins by measuring local optical flow at every picture element (pixel). Then, using algebraic multigrid (AMG) techniques, it assembles adjacent pixels that are similar in either their motion or intensity values into small aggregates, each pixel being allowed to belong to different aggregates with different weights. These aggregates in turn are assembled into larger aggregates, then still larger ones, etc., eventually yielding full segments.

As the aggregation process proceeds, the estimation of the motion of each aggregate is refined and ambiguities are resolved. In addition, an adaptive motion model is used to describe the motion of an aggregate, depending on the amount of flow information available within it: a translation model describes the motion of pixels and small aggregates, an affine model describes the motion of intermediate-sized aggregates, and a perspective model describes aggregates at the coarsest levels of scale. In addition, methods for identifying correspondences between aggregates in different images are being developed; these are suitable for image sequences separated by fairly large motion.
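As a concrete starting point, a small Python sketch of the first step only: a Lucas-Kanade least-squares estimate of the local optical flow at every pixel. The subsequent algebraic-multigrid aggregation and the adaptive translation/affine/perspective models are not reproduced here; the window size is an illustrative choice.

```python
import numpy as np

def local_optical_flow(frame1, frame2, window=5):
    """Lucas-Kanade estimate of the optical flow at every pixel (the first
    step of the segmentation described above, before pixels are aggregated).
    Returns per-pixel flow components (u, v)."""
    iy, ix = np.gradient(frame1.astype(float))          # spatial gradients
    it = frame2.astype(float) - frame1.astype(float)    # temporal gradient
    half = window // 2
    h, w = frame1.shape
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            sl = np.s_[y - half:y + half + 1, x - half:x + half + 1]
            a = np.stack([ix[sl].ravel(), iy[sl].ravel()], axis=1)
            b = -it[sl].ravel()
            # Least-squares solution of A [u v]^T = b over the local window.
            flow, *_ = np.linalg.lstsq(a, b, rcond=None)
            u[y, x], v[y, x] = flow
    return u, v
```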

  • Prof. Ronen Ezra Basri
1447

A cheap and effective solution for protecting RFID tags from power attacks.

RFID tags are secure tags present in many applications (e.g. secure passports). They are poised to become the most far-reaching wireless technology since the cell phone, with worldwide revenues expected to reach $2.8 billion in 2009. RFID tags were believed to be immune to power analysis attacks since they have no direct connection to an external power supply. However, recent research has shown that they are vulnerable to such attacks, since it is possible to measure their power consumption without actually needing either tag or reader to be physically touched by the attacker. Furthermore, this attack may be carried out even if no data is being transmitted between the tag and the attacker, making the attack very hard to detect. The current invention overcomes these problems by a slight modification of the tag's electronic system, so that it will not be vulnerable to power analysis.

Applications


  • Improved security of RFID tags.

Advantages


  • Simple and cost-effective
  • The design involves changes only to the RF front-end of the tag, making it the quickest to roll out


Technology's Essence


An RFID system consists of a high-powered reader communicating with a tag using a wireless medium. The reader generates a powerful electromagnetic field around itself and the tag responds to this field. In passive systems, placing a tag inside the reader's field also provides it with the power it needs to operate. According to the inventive concept, the power consumption of the computational element is detached from the power supply of the tag. Thus, the present invention can almost eliminate the power consumption information.

  • Prof. Adi Shamir
