
Mathematics and Computer Science

Briefcase 1784

Romantic relationships have a major impact on our social, emotional and physical wellbeing. Despite this overwhelming importance, we have only a limited understanding of the rules and mechanisms at the heart of good relationships. A popular notion holds that increased similarity between relationship partners predicts continued positive relationship quality, although studies of similarity in personality and attitude measures have failed to support this notion. Researchers have found that similarity in emotional characteristics may be more relevant to relationship quality. The sensory system most intimately linked to emotion is olfaction. Given this powerful link, Prof. Sobel and his olfaction research group hypothesized that individuals with similar olfactory perception would have good romantic relationships.

The new research observed a remarkably powerful association: couples who smell the world in the same way have good romantic relationships, with this one measure explaining ~50% of the variance in relationship quality. Thus, olfactory perception, which opens a unique window into the emotional brain, indicates that genuine similarity in a primal, non-verbal essence is a component of successful romantic relationships.

Applications


  • Online matchmaking platforms
  • Scent marketing


Advantages


  • High-accuracy prediction of romantic fit and personality traits

  • Straightforward evaluation method and user-interface operation


Technology's Essence


The “SmellSpace” online platform generates an individual smell-based identity that can predict one’s personality, as well as a smell-based matching score: https://smellspace.com/

The method of perceptual fingerprinting includes the following steps (a sketch of the computation follows the list):

  • Each user smells the same set of odors.
  • The user rates each odor using verbal descriptors.
  • The perceived similarity of all possible pairs of odors is calculated, and the pairwise similarities form a matrix.
  • Finally, the matrices are correlated across individuals.
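
A minimal Python sketch of how the fingerprinting and matching could be computed (the cosine-similarity choice and the function names are illustrative assumptions, not the SmellSpace implementation):

```python
import numpy as np

def perceptual_fingerprint(ratings):
    """One user's fingerprint: `ratings` is an (n_odors, n_descriptors) array of
    that user's verbal-descriptor ratings. Returns the matrix of perceived
    similarities between all pairs of odors (cosine similarity here)."""
    r = ratings - ratings.mean(axis=1, keepdims=True)
    r = r / np.linalg.norm(r, axis=1, keepdims=True)   # assumes no constant rating rows
    return r @ r.T

def olfactory_match_score(ratings_a, ratings_b):
    """Correlate two users' pairwise-similarity matrices over all odor pairs."""
    fa, fb = perceptual_fingerprint(ratings_a), perceptual_fingerprint(ratings_b)
    iu = np.triu_indices(fa.shape[0], k=1)             # each unordered odor pair once
    return np.corrcoef(fa[iu], fb[iu])[0, 1]
```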

  • Prof. Noam Sobel
Briefcase 1802

A new signal processing tool for the detection of pulses travelling through media with complex or unknown dispersion properties was developed by the group of Prof. Gal-Yam, originally for detecting radio bursts in astronomical observations.
Pulses are applied in various fields such as oil & gas exploration, detection (e.g. sonar, lidar and radar) and communication. When pulses pass through dispersive media, the arrival times at the detector of different frequency components may differ, and as a result the pulse may become degraded (e.g. transformed into a longer pulse with reduced intensity), even to the point of becoming indistinguishable from the noise. This problem becomes even more challenging when detecting short pulses that travel through complex or unknown media.
The new method presented here provides a proven and efficient solution that can be applied in different scenarios where short pulses dispersed by complex media are used.

Applications


  • Detection and surveying technologies: sonar, lidar, radar, etc.

Advantages


  • Efficient, requires limited computational resources
  • Generic, can be applied to various setups
  • Easily implementable into existing systems

Technology's Essence


The method includes obtaining an input array of cells, each indicating the intensity of a frequency component of the signal at a representative time. A fast dispersion measure transform (FDMT) is applied to concurrently sum the cells of the input array that lie along different dispersion curves, each curve defined by a known non-linear functional form and uniquely characterized by a time coordinate and a value of the dispersion measure. Application of the FDMT includes initially generating a plurality of sub-arrays, each representing a frequency sub-band, and iteratively combining pairs of adjacent sub-arrays in accordance with an addition rule until all of the initially generated sub-arrays are combined into an output array of the sums. A cell of the output array that is indicative of a transmitted pulse is then identified.
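
For intuition, here is a brute-force Python sketch of the operation that the FDMT accelerates: summing a dynamic spectrum along trial dispersion curves. The frequency units, the cold-plasma delay constant and the array layout are illustrative assumptions, not part of the patented algorithm.

```python
import numpy as np

def brute_force_dispersion_search(spectrogram, freqs_mhz, dt, dm_trials):
    """spectrogram: (n_freq, n_time) intensities; freqs_mhz: channel centre
    frequencies in MHz; dt: time resolution in seconds; dm_trials: candidate
    dispersion measures. Returns scores[i, j] = intensity summed along the
    dispersion curve of DM trial i ending at time sample j."""
    f_ref = freqs_mhz.max()
    scores = np.zeros((len(dm_trials), spectrogram.shape[1]))
    for i, dm in enumerate(dm_trials):
        # quadratic cold-plasma delay of each channel relative to the top frequency
        delays = 4.15e3 * dm * (freqs_mhz**-2.0 - f_ref**-2.0)   # seconds
        shifts = np.round(delays / dt).astype(int)
        for ch, s in enumerate(shifts):
            scores[i] += np.roll(spectrogram[ch], -s)            # de-disperse and sum
    return scores
```

The FDMT reaches the same set of sums far more efficiently by reusing partial sums of frequency sub-bands, which is what makes it practical with limited computational resources.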

  • Prof. Avishay Gal-Yam
Briefcase 1696

A new method for observing large areas with physically small detectors, which are unable to cover the whole area simultaneously, based on multiplexing several scanned areas onto a single detector unit followed by algorithmic reconstruction of the true field of view.
Astronomical observations require the ability to detect very weak signals at high spatial resolution. This dictates the special characteristics of observation systems: they need a large aperture, high-resolution detectors and very low system noise. These demands result in high cost and complexity.
Our multiplexing and reconstruction method was developed based on the sparse nature of astronomical observations, and it can be implemented in any application in which sporadic data points are to be found against a fixed (whether detailed or blank) background.

Applications


  • Highly efficient telescopes
  • Quick quality assurance systems – fault metrology
  • Implementation in microscopy

Advantages


  • Use of small size detectors
  • Ability to scan large fields (compared to detector size)
  • Maintaining high resolution
  • Significant shortening of scan time
  • Easily applicable to existing systems

Technology's Essence


The method was developed for astronomical observations in which the studied field is immense and the detector size is relatively small and limited. The invention consists of an optical system that directs light (IR, Vis, UV or other) from different locations in the sky to the focal plane of a telescope onto a specific single detector area, creating a multiplexed image in which several portions of the sky are presented collectively.
Such multiplexing is done on each detector unit area with a different set of sky loci.
A reconstruction algorithm was developed to recover the set of sub-observations in a manner that guarantees unique recovery of the original wide-field image even when objects overlap.
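
A toy numerical sketch of the idea (the patch sizes, the per-unit shift patterns and the thresholding are illustrative assumptions; the actual optical design and reconstruction algorithm are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse wide "sky": K patches of 64x64 pixels with a few point sources.
K, H, W = 4, 64, 64
sky = np.zeros((K, H, W))
for _ in range(5):
    k, y, x = rng.integers(K), rng.integers(H), rng.integers(W)
    sky[k, y, x] = rng.uniform(5.0, 10.0)

# Detector unit A sums the patches directly; unit B applies a known, different
# per-patch shift before summing (a different multiplexing of sky loci).
shifts = [(0, 0), (7, 3), (13, 21), (29, 11)]        # hypothetical pattern
mux_a = sky.sum(axis=0)
mux_b = sum(np.roll(sky[k], shifts[k], axis=(0, 1)) for k in range(K))

# Reconstruction: a bright pixel in mux_a can have come from patch k only if the
# correspondingly shifted pixel also lights up in mux_b; sparsity makes the
# assignment unique with high probability.
recovered = np.zeros_like(sky)
for y, x in zip(*np.nonzero(mux_a > 1.0)):
    for k, (dy, dx) in enumerate(shifts):
        if mux_b[(y + dy) % H, (x + dx) % W] > 1.0:
            recovered[k, y, x] = mux_a[y, x]
            break

print("recovered", np.count_nonzero(recovered), "of", np.count_nonzero(sky), "sources")
```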

  • Prof. Avishay Gal-Yam
Briefcase 1799

A new computer graphics tool for the efficient and robust deformation of 2D images was developed by the group of Prof. Lipman.

Space deformation is an important tool in graphics and image processing, with applications ranging from image warping and character animation, to non-rigid registration and shape analysis. Virtually all methods attempt to find maps that possess three key properties: smoothness, injectivity and shape preservation. Furthermore, for the purpose of warping and posing characters, the method should have interactive performance. However, there is no known method that possesses all of these properties.

Previous deformation models can be roughly divided into mesh-based and meshless models. Mesh-based maps are predominantly constructed using linear finite elements and are inherently not smooth, but can be made to look smooth by using highly dense elements. Although methods for creating maps with controlled distortion exist, they are time-consuming, and dense meshes prohibit their use in an interactive manner. Meshless maps, on the other hand, are usually defined using smooth bases and hence are smooth themselves, yet no known technique ensures their injectivity and/or bounds on their distortion.

The new method presented here bridges the gap between mesh and meshless methods, by providing a generic framework for making any smooth function basis suitable for deformation.

Applications


  • Computer graphics and animation
  • Image registration for medical imaging, satellite imaging and military applications

Advantages


  • Robust, fast, efficient and scalable

  • Generic, can be applied to various scenarios

  • Possesses smoothness, injectivity and shape preservation with interactive performance


Technology's Essence


Deformation of 2D images is accomplished by enabling direct control over the distortion of the Jacobian during optimization, including preservation of orientation (to avoid flips). The method generates maps by constraining the Jacobian on a dense set of “collocation” points, using an active-set approach. Only a sparse subset of the collocation points needs to be active at any given moment, resulting in fast performance while retaining the distortion and injectivity guarantees. Furthermore, a precise mathematical relationship is derived between the density of the collocation points, the maximal distortion achieved on them, and the maximal distortion achieved everywhere in the domain of interest.
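
This is not the optimization itself, but a small Python sketch of the quantities being controlled: the Jacobian of a 2D map evaluated on a dense set of collocation points, checked for orientation preservation (positive determinant) and bounded distortion. The specific deformation and the grid density are illustrative assumptions.

```python
import numpy as np

# A toy smooth 2D deformation of the unit square (illustrative only).
def deform(p, amp=0.15):
    x, y = p[..., 0], p[..., 1]
    return np.stack([x + amp * np.sin(np.pi * y),
                     y + amp * np.sin(np.pi * x)], axis=-1)

def jacobians(f, pts, eps=1e-5):
    """Finite-difference Jacobians of f at pts (N, 2) -> (N, 2, 2)."""
    J = np.empty(pts.shape[:-1] + (2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = eps
        J[..., :, j] = (f(pts + dp) - f(pts - dp)) / (2 * eps)
    return J

# Dense grid of collocation points.
g = np.linspace(0.0, 1.0, 50)
pts = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

J = jacobians(deform, pts)
dets = np.linalg.det(J)                                  # orientation (no flips if > 0)
sv = np.linalg.svd(J, compute_uv=False)                  # singular values per point
distortion = sv[:, 0] / sv[:, 1]                         # conformal distortion ratio

print("orientation preserved at all collocation points:", bool((dets > 0).all()))
print("maximal distortion on the collocation points:", float(distortion.max()))
```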

  • Prof. Ronen Ezra Basri
Briefcase 1801

A new image processing tool for transient detection was developed by the group of Prof. Gal-Yam, originally for time-domain observational astronomy.
Image sequences are used in various fields, including medical imaging and satellite/airborne imaging. The comparison between images taken at different conditions (e.g. equipment or configuration, angles, weather and wavelength) can be a highly non-trivial problem, as subtraction artifacts can outnumber real changes between images.
The existing remedies for this problem are highly complex solutions that use machine learning algorithms to narrow the sea of candidates. In some cases, human interpretation of images cannot be avoided, resulting in very long processing times.
The new method presented here provides a proven solution for the subtraction of images taken at varying conditions. The tool can be applied for any type of imaging, allowing fast processing and accurate results.

Applications


  • Satellite/airborne imaging

  • Medical imaging
  • Defect detection

Advantages


  • Fast and automatic

  • Generic, can be applied to various imaging scenarios
  • Easily implementable into existing systems

Technology's Essence


The new method is used for processing at least two N-dimensional data-measurements (DMs) of a physical property, for detecting one or more new objects and/or a transition of one or more known objects, in complex constant-background DMs. Generally, the method includes the following steps (a simplified sketch follows the list):

  1. Generating a filtered-new-DM by match-filtering a new-DM with respect to the impulse response of a reference-DM.
  2. Generating a filtered-reference-DM by match-filtering the reference-DM with respect to the impulse response of the new-DM.
  3. Generating an N-dimensional object-indicator (OI) by subtracting the filtered-reference-DM from the filtered-new-DM, or vice versa.
  4. Generating an N-dimensional data score from the N-dimensional OI, where each score is a probe for the existence of an object at a specific N-dimensional location.
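
A minimal Python sketch of steps (1)-(3) for 2D images, using generic FFT convolution (the exact filters, normalization and score statistic of the actual method are not reproduced here):

```python
import numpy as np
from scipy.signal import fftconvolve

def object_indicator(new_dm, ref_dm, psf_new, psf_ref):
    """Cross match-filtering sketch: filter each data-measurement with the other's
    impulse response, then subtract to obtain an object indicator (OI)."""
    filtered_new = fftconvolve(new_dm, psf_ref, mode="same")   # step (1)
    filtered_ref = fftconvolve(ref_dm, psf_new, mode="same")   # step (2)
    return filtered_new - filtered_ref                          # step (3)

def score_map(oi):
    """Step (4), crudely: standardize the OI so each pixel probes for an object."""
    return (oi - oi.mean()) / oi.std()
```
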
  • Prof. Avishay Gal-Yam
Briefcase 1765

A new image reconstruction tool based on non-iterative phase information retrieval from a single diffraction pattern was developed by the group of Prof. Oron. 
Lensless imaging techniques enable indirect high-resolution observation of objects by measuring the intensity of their diffraction patterns. These techniques utilize radiation in the X-ray regime to image non-periodic objects at sizes that prohibit the use of longer wavelengths. However, retrieving the phase information of the diffraction pattern is not a trivial task, and current methods trade experimental complexity against computational reconstruction efficiency.
The method described here is suitable for use with existing lensless imaging techniques to provide direct, robust and efficient phase data while requiring reduced computational and experimental complexity. This method, demonstrated in a laboratory setup on 2D objects, is also applicable in 1D. It can be applied to various phase retrieval applications such as coherent diffractive imaging and ultrashort pulse reconstruction.

Applications


  • Phase microscopy
  • Signal processing
  • Holography
  • X-ray imaging

Advantages


  • A generic solution to the phase retrieval problem
  • Non-iterative approach
  • An efficient and noise-robust tool

Technology's Essence


The method is based on the fact that the Fourier transform of the measured diffraction intensity is the autocorrelation of the object. The autocorrelation and cross-correlations of two sufficiently separated objects are spatially distinct. Based on this, the method consists of three main steps: (a) the sum of the objects’ autocorrelations, as well as their cross-correlation, are reconstructed from the Fourier transform of the measured diffraction pattern; (b) the individual objects’ autocorrelations are reconstructed from their sum and the cross-correlation; (c) using the two intensities and the interference cross term, double-blind Fourier holography is applied to recover the phase by solving a set of linear equations.
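
A small 1D numerical illustration of the underlying facts (the object positions and sizes are arbitrary assumptions): the Fourier transform of the measured intensity is the autocorrelation of the field, and for two well-separated objects the cross-correlation terms occupy lags distinct from the central autocorrelation terms, so they can be isolated directly.

```python
import numpy as np

# Two compact 1D "objects" placed well apart on a common grid.
N = 256
field = np.zeros(N, dtype=complex)
field[20:30] = np.exp(1j * np.linspace(0.0, 1.0, 10))   # object A
field[120:132] = 1.0 + 0.5j                              # object B

# Lensless measurement: only the diffraction intensity |F|^2 is recorded.
intensity = np.abs(np.fft.fft(field)) ** 2

# Its inverse Fourier transform is the autocorrelation of the combined field.
autocorr = np.fft.ifft(intensity)
lags = np.fft.fftfreq(N, d=1.0 / N).astype(int)          # integer lags -128..127

central = np.abs(autocorr[np.abs(lags) < 30]).sum()              # A*A and B*B terms near lag 0
cross = np.abs(autocorr[np.abs(np.abs(lags) - 101) < 16]).sum()  # A*B terms near lag +-101
print(f"energy near lag 0: {central:.1f}; energy near the cross-correlation lags: {cross:.1f}")
```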

  • Prof. Dan Oron
Briefcase 1800

A new software tool used for the removal of artifacts from transcranial magnetic stimulation (TMS) triggered electroencephalography (EEG) was developed by the group of Prof. Moses.

The combined use of TMS with EEG allows for a unique measurement of the brain's global response to localized and abrupt stimulations. This may allow TMS-EEG to be used as a diagnostic tool for various neurologic and psychiatric conditions.

However, the TMS induces large electric artifacts in the EEG that are unrelated to brain activity and obscure crucial stages of the brain's response. These artifacts are orders of magnitude larger than the physiological brain activity and persist from a few to hundreds of milliseconds. No generally accepted algorithm is available that can remove the artifacts without unintentionally and significantly altering the physiological information.

The software designed according to this model, together with a friendly GUI, is a powerful tool for the TMS-EEG field. The software has been tested and proven effective on real datasets measured on psychiatric patients.

Applications


  • TMS triggered EEG diagnostics

Advantages


  • Easy to use software with a GUI
  • Exposes the full EEG from the brain

Technology's Essence


The new software tool is based on the observation that, contrary to expectation, the decay of the electrode voltage after the TMS pulse follows a power law in time rather than an exponential. A model was built based on two-dimensional diffusion in the skin of the charge accumulated from the high electric fields of the TMS. This model reproduces the artifact precisely, including the many perplexing artifact shapes seen on the different electrodes. Artifact removal software based on this model exposes the full EEG from the brain, as validated by continuously reconstructing 50 Hz signals of the same magnitude as the brain signals.
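
A hedged Python sketch of the modelling step, fitting a power-law decay after the pulse and subtracting it; the function names, fitting window and parameter bounds are assumptions for illustration, not the published software.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law_decay(t, a, t0, p, c):
    """Power-law-in-time artifact model: a * (t + t0)^(-p) + c."""
    return a * (t + t0) ** (-p) + c

def subtract_tms_artifact(trace, fs, pulse_idx, fit_ms=300.0):
    """Fit the post-pulse decay of one electrode and subtract it.
    trace: 1D EEG samples; fs: sampling rate in Hz; pulse_idx: sample of the TMS pulse."""
    n = int(fit_ms * 1e-3 * fs)
    t = np.arange(1, n + 1) / fs
    seg = trace[pulse_idx + 1 : pulse_idx + 1 + n].astype(float)
    p0 = [seg[0] - seg[-1], 1e-3, 1.0, seg[-1]]
    bounds = ([-np.inf, 1e-6, 0.1, -np.inf], [np.inf, 1.0, 5.0, np.inf])
    params, _ = curve_fit(power_law_decay, t, seg, p0=p0, bounds=bounds)
    cleaned = trace.astype(float).copy()
    cleaned[pulse_idx + 1 : pulse_idx + 1 + n] -= power_law_decay(t, *params) - params[3]
    return cleaned
```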

  • Prof. Elisha Moses
Briefcase 1629

A new unsupervised learning tool for analyzing large datasets using very limited known data via clustering was developed by the group of Prof. Domany. This solution was originally demonstrated for inferring pathway deregulation scores for specific tumor samples on the basis of expression data.
Nearly all existing methods analyze pathway activity in a global, “atomistic” manner, based on an entire sample set, without attempting to characterize individual tumors. Other methods require detailed information on pathway activity mechanisms and other data that are unavailable in the vast majority of cancer datasets.
The new algorithm described here transforms gene-level information into pathway-level information, generating a compact and biologically relevant representation of each sample. This can be used as an effective prognostic and predictive tool that helps healthcare providers find optimal treatment strategies for cancer patients. Furthermore, the method can be used generically to reduce the degrees of freedom and derive meaningful output from multi-dimensional data using limited knowns.

Applications


  • Personalized cancer treatment.
  • A tool for mining insight from large datasets with limited knowns.

Advantages


  • Provides personalized solutions.
  • Can be utilized for rare conditions with very limited known information.
  • Proven on real oncologic datasets.
  • A generic unsupervised learning tool.

Technology's Essence


The algorithm analyzes N_P pathways, one at a time, assigning a score D_P(i) to each sample i and pathway P, which estimates the extent to which the behavior of pathway P deviates from normal in sample i. To determine this pathway deregulation score, the expression levels of the d_P genes that belong to P (identified using available databases) are used. Each sample i is a point in this d_P-dimensional space; the entire set of samples forms a cloud of points, and the “principal curve” that captures the variation of this cloud is calculated. Each sample is then projected onto this curve. The pathway deregulation score D_P(i) is defined as the distance, measured along the curve, between the projection of sample i and the projection of the normal samples.
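
A hedged sketch of the scoring step for a single pathway, using the first principal component as a crude stand-in for the principal curve of the actual method (the function name and the use of PCA are assumptions for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

def pathway_deregulation_scores(X_pathway, normal_mask):
    """X_pathway: (n_samples, d_P) expression matrix restricted to pathway P's genes.
    normal_mask: boolean array marking the normal samples.
    Returns an approximate D_P(i) for every sample i."""
    pc = PCA(n_components=1).fit(X_pathway)
    t = pc.transform(X_pathway).ravel()        # position of each sample along the "curve"
    t_normal = t[normal_mask].mean()           # projection of the normal samples
    return np.abs(t - t_normal)                # distance along the curve from normal
```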

 

  • Prof. Eytan Domany
Briefcase 1585

Our scientific team has discovered a method to apply the Gabor Transform to signal processing and data compression.

Compared to existing methods that are based on Fourier transform, the new method provides for up to 25% savings in content size for video, audio and images, without any loss in quality.

By embracing our method, content providers, ISPs and mobile carriers can achieve major savings in data storage and data transfer costs.

Applications


The method can be used in virtually all applications involving data storage, communication and signal processing. One of the main commercial applications is lossy data compression for video, audio and images. These types of content constitute the bulk of today’s Internet traffic, and improved compression will generate substantial savings in storage and data transfer costs.

The method also applies to the storage, communication and processing of quantum information and may therefore be expected to have applications in quantum calculations, quantum communication and quantum information processing.


Advantages


Existing data compression methods are based on numerical implementations of the Fourier transform, known as FFT, DCT and similar.

Compared to these methods, the Gabor transform method demonstrates a very significant advantage in terms of the size of the compressed material.

The method provides for up to 25% savings in data size, while keeping the same perceived quality of the content.


Technology's Essence


We have discovered the definitive solution to the problem of obtaining accuracy and stability in the Gabor transform.  We realized that there must be an exact informational equivalence between the Gabor transform and the discrete Fourier transform (DFT). The latter is known to provide an exact representation of functions that are band-limited with finite support.  Since the DFT implicitly assumes periodic boundary conditions, to obtain this exact equivalence one needs to modify the Gaussians in the Gabor transform to obey periodic boundary conditions. This leads to Gaussian flexibility with Fourier accuracy --- precisely what has been sought since 1946.
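
A toy Python construction of the key ingredient, Gaussians periodized onto the signal length and arranged on a critically sampled time-frequency lattice (the lattice parameters and Gaussian width are illustrative assumptions; this is not the patented compression scheme):

```python
import numpy as np

N, sigma = 64, 6.0          # signal length and Gaussian width
a = b = 8                   # time step and frequency step, with a * b = N (critical sampling)
n = np.arange(N)

def periodized_gaussian(center):
    """Gaussian wrapped onto the N-point circle, so it obeys periodic boundary conditions."""
    return sum(np.exp(-((n - center + k * N) ** 2) / (2.0 * sigma**2)) for k in range(-3, 4))

atoms = []
for m in range(N // a):                              # time shifts of the window
    g = periodized_gaussian(m * a)
    for l in range(N // b):                          # modulations (frequency shifts)
        atoms.append(g * np.exp(2j * np.pi * l * b * n / N))
G = np.array(atoms)                                  # N x N Gabor analysis matrix

signal = np.random.default_rng(1).standard_normal(N)
coeffs = G.conj() @ signal                           # Gabor coefficients: inner products with every atom
print("condition number of the periodized-Gaussian Gabor matrix:", round(float(np.linalg.cond(G)), 1))
```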

  • Prof. David J. Tannor
Briefcase 1647

Novel algorithms developed at the Weizmann Institute of Science for Content-Based Image Retrieval (CBIR) can enhance search engines by crowd-sourcing and improved clustering.
Discovering visual categories in a collection of images is a long-standing challenge in computer vision, which limits image-based search engines. Existing approaches search for a common cluster model: they focus on identifying shared visual properties (such as a shared object) and subsequently group the images into meaningful clusters based on these shared properties. Such methods are likely to fail when encountering a highly variable set of images or a fairly limited number of images per category.
Researchers from Prof. Michal Irani’s lab suggest a novel approach based on ‘similarity by composition’. This technology detects statistically significant regions that co-occur across images, revealing strong and meaningful affinities even if they appear in only a few images. The outcome is a reliable cluster in which each image has high affinity to many images in the cluster, and weak affinity to images outside the cluster.

Applications


  • Image search engines - can be applied for collaborative search between users.
  • Detecting abnormalities in medical imaging.
  • Quality assurance in the fields of agriculture, food, the pharmaceutical industry, etc.
  • Security industry: from counting people to identifying suspicious acts.
  • Computer games and brain-machine interfaces.

Advantages


  • Can be applied to very few images, as well as benchmark datasets, and yields state-of-the-art results.
  • Handles large diversity in appearance.
  • The search is not a global search; it requires no semantic query, tagging or pre-existing knowledge.
  • The multi-image collaboration significantly speeds up the process, reducing the number of random samples and iterations.
  • Sets of image affinities are obtained in time that is nearly linear in the size of the image collection.


Technology's Essence


In “clustering by composition”, a good cluster is defined as one in which each image can be easily composed from statistically significant pieces of other images in the cluster, while being difficult to compose from images outside the cluster. Multiple images exploit their ‘wisdom of crowds’ to further improve the process. Using a collaborative randomized search algorithm, images can be composed from each other simultaneously and efficiently. This enables each image to direct the other images where to search for similar regions within the image collection. The resulting sets of image affinities are sparse yet meaningful and reliable.
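
A deliberately naive Python sketch of a composition-based affinity between two images, using brute-force nearest-neighbour patch matching; the collaborative randomized search and statistical-significance weighting of the actual method are not reproduced here.

```python
import numpy as np

def patches(img, ps=8, stride=8):
    """Non-overlapping ps x ps patches of a grayscale image as flat vectors."""
    H, W = img.shape
    return np.array([img[y:y + ps, x:x + ps].ravel()
                     for y in range(0, H - ps + 1, stride)
                     for x in range(0, W - ps + 1, stride)], dtype=float)

def composition_affinity(img_i, img_j, ps=8):
    """How cheaply img_i can be 'composed' from pieces of img_j:
    negative mean nearest-neighbour patch distance (higher = easier to compose)."""
    Pi, Pj = patches(img_i, ps), patches(img_j, ps)
    d2 = ((Pi[:, None, :] - Pj[None, :, :]) ** 2).sum(axis=-1)   # brute-force NN search
    return -d2.min(axis=1).mean()
```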

  • Prof. Michal Irani
Briefcase 1571

A novel social behavior monitoring system automatically tracks the precise location of each animal at excellent temporal resolution. This innovative technology provides simultaneous identification of complex social and individual behaviors via an integration of RFID and video surveillance.

There is a rapidly growing interest in detecting the molecular substrates of social behavior. This interest is driven by the vast implications of such understanding in both research and the pharmaceutical industry, since some prevalent pathological conditions are mainly characterized by a behavioral deficit or abnormality.

It is extremely challenging to quantify social behavior in a reliable manner. Existing methods struggle to balance objectively quantifying behavior on one hand with enabling natural, stress-free behavioral estimation on the other. Currently, researchers work in a strictly controlled and constrained environment that is unfamiliar and stressful to the animals. The outcome is a highly contaminated measurement of natural behavior. This difficulty becomes increasingly complex when more than one animal is involved, as is often the case in social behavioral studies.

Applications


  • Rigorous characterization of social organizational deficiencies and evaluation of their severity in animal and human models (for example in autism).
  • An optimized system for estimating the efficacy of clinical treatments.

Advantages


  • Long-term tracking of an unlimited number of simultaneously studied animals.
  • Machine-based, hence objective and automated, quantification of behavior.
  • Excellent spatiotemporal resolution in a semi-natural environment.
  • Flexible: the number, size and distribution of the RFID antennas can be adjusted to different enclosure dimensions.
  • Can be applied from individual behavioral profiles and pair interactions up to the collective social organization of groups.
  • Systematic analysis and classification of behaviors, from basic locomotion up to more complex social behaviors.

Technology's Essence


Researchers at the Weizmann Institute developed a method for tightly controlled monitoring of social behavior in a semi-natural environment. They used integrated and synchronized chip reporting and continuous video footage to precisely locate each individual animal. Using this automated monitoring, which provides exceptional temporal resolution, they achieved correct identification of numerous basic individual behaviors as well as complex social behaviors. Such complex behavioral profiles set the basis for subsequent analysis that reveals the formation of a social hierarchy.
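
A hedged Python sketch of the data-fusion step, assigning an RFID identity to a video tracklet by voting over antenna reads; the data layout, the distance threshold and the voting rule are illustrative assumptions, not the actual system.

```python
import numpy as np

def assign_identity(track_xy, track_t, rfid_reads, antenna_xy, max_dist=10.0):
    """track_xy: (T, 2) tracklet positions; track_t: (T,) timestamps;
    rfid_reads: iterable of (timestamp, antenna_id, tag_id);
    antenna_xy: dict antenna_id -> (x, y) position. Returns the winning tag id."""
    votes = {}
    for t_read, antenna, tag in rfid_reads:
        k = int(np.argmin(np.abs(track_t - t_read)))           # video frame closest in time
        d = np.linalg.norm(track_xy[k] - np.asarray(antenna_xy[antenna]))
        if d < max_dist:                                        # tracklet was at that antenna
            votes[tag] = votes.get(tag, 0) + 1
    return max(votes, key=votes.get) if votes else None
```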

  • Dr. Tali Kimchi
Briefcase 1021

A method for mapping and correcting optical distortion introduced by live-cell specimens in microscopy, which cannot be overcome using optical techniques alone, can be used for both light microscopy and confocal microscopy. The system determines the 3D refractive index of the samples and provides methods for ray tracing, calculation of the 3D space-variant point spread function, and generalized deconvolution.

Applications


Microscopy: The method was developed and applied for light microscopy, and is of critical importance for the detection of weak fluorescently labeled molecules (such as GFP fusion proteins) in live cells. It may also be applicable to confocal microscopy and to other imaging methods such as ultrasound, deep-ocean sonar imaging, radioactive imaging, non-invasive deep-tissue optical probing and photodynamic therapy.

Gradient glasses: The determination of the three-dimensional refractive index of samples allows testing and optimization of techniques for the production of gradient glasses. Recently, continuous refractive-index-gradient glasses (GRIN, GRADIUM) were introduced, with applications in high-quality optics, microlenses, aspherical lenses, plastic molded optics, etc. Lenses built from such glasses can be aberration-corrected at a level that would require doublets and triplets using conventional glasses. Optimized performance of such optics requires ray tracing along curved paths, as opposed to straight segments between the surface borders of homogeneous glass lenses. Curved ray tracing is computation-intensive and dramatically slows down the optimization of optical properties. Our algorithm for ray tracing in a gradient refractive index eliminates this computational burden.

Technology's Essence


A computerized package to process three-dimensional images of live biological cells and tissues was developed to computationally correct specimen-induced distortions whose correction cannot be achieved by optical techniques. The package includes the following components (a minimal ray-tracing sketch follows the list):

  1. Three-dimensional (3D) mapping of the refractive index of the specimen.
  2. A fast method for ray tracing through a gradient-refractive-index medium.
  3. Three-dimensional space-variant point spread function calculation.
  4. Generalized three-dimensional deconvolution.
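
A minimal first-order Python sketch of ray tracing through a gradient-refractive-index medium (component 2 above); the index profile and the simple stepping scheme are illustrative assumptions, not the package's algorithm.

```python
import numpy as np

def trace_ray(r0, d0, n_of, grad_n, ds=1e-3, steps=2000):
    """Step a ray through a medium with refractive index n_of(r) and gradient grad_n(r),
    bending the direction toward higher index at each small step ds."""
    r = np.asarray(r0, dtype=float)
    d = np.asarray(d0, dtype=float)
    d /= np.linalg.norm(d)
    path = [r.copy()]
    for _ in range(steps):
        d = d + ds * grad_n(r) / n_of(r)      # first-order bend toward higher index
        d /= np.linalg.norm(d)                # keep the direction a unit vector
        r = r + ds * d
        path.append(r.copy())
    return np.array(path)

# Example: a radial GRIN profile n(x, y, z) = n0 - 0.5 * k * (x^2 + y^2), ray launched along z.
n0, k = 1.6, 0.2
n_of = lambda r: n0 - 0.5 * k * (r[0] ** 2 + r[1] ** 2)
grad_n = lambda r: np.array([-k * r[0], -k * r[1], 0.0])
path = trace_ray([0.3, 0.0, 0.0], [0.0, 0.0, 1.0], n_of, grad_n)
print("ray end point:", path[-1])
```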

  • Prof. Zvi Kam
Briefcase 1250

A robust method of identifying moving or changing objects in a video sequence groups each pixel with other adjacent pixels according to either motion or intensity values. Pixels are then repeatedly regrouped into clusters in a hierarchical manner. As these clusters are regrouped, the motion pattern is refined, until the full pattern is reached.

Applications


These methods for motion-based segmentation may be used in a multitude of applications that need to correctly identify meaningful regions in image sequences and compute their motion. Such applications include:

  1. Surveillance and homeland security - detecting changes, activities, objects.
  2. Medical Imaging - imaging of dynamic tissues.
  3. Quality control in manufacturing, and more.

Technology's Essence


Researchers at the Weizmann Institute of Science have developed a multiscale, motion-based segmentation method which, unlike previous methods, uses the inherent multiple scales of information in images. The method begins by measuring local optical flow at every picture element (pixel). Then, using algebraic multigrid (AMG) techniques, it assembles adjacent pixels that are similar in either their motion or intensity values into small aggregates, each pixel being allowed to belong to different aggregates with different weights. These aggregates in turn are assembled into larger aggregates, then still larger, and so on, eventually yielding full segments.

As the aggregation process proceeds, the estimation of the motion of each aggregate is refined and ambiguities are resolved. In addition, an adaptive motion model is used to describe the motion of an aggregate, depending on the amount of flow information available within each aggregate. In particular, a translation model is used to describe the motion of pixels and small aggregates, switching to an affine model for intermediate-sized aggregates, and finally to a perspective model for aggregates at the coarsest levels of scale. Methods for identifying correspondences between aggregates in different images have also been developed; these are suitable for image sequences separated by fairly large motion.
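
A hedged Python sketch of one soft-aggregation (coarsening) step of this flavour, grouping pixels into seed aggregates by similarity of motion/intensity features; the seed choice, similarity kernel and coarsening rule are illustrative assumptions, not the AMG scheme itself.

```python
import numpy as np

def soft_aggregate(features, seed_idx, beta=5.0):
    """features: (N, d) per-pixel feature vectors (e.g. optical-flow components and intensity).
    seed_idx: indices of the pixels chosen as seeds of the next-coarser aggregates.
    Returns (N, K) soft membership weights; each pixel may belong to several aggregates."""
    seeds = features[seed_idx]                                       # (K, d)
    d2 = ((features[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)   # squared feature distances
    w = np.exp(-beta * d2)
    return w / w.sum(axis=1, keepdims=True)

def coarsen(features, weights):
    """Coarse-level features as similarity-weighted averages of the fine level;
    repeating this aggregate -> coarser-aggregate step yields the multiscale hierarchy."""
    return (weights.T @ features) / weights.sum(axis=0)[:, None]
```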

  • Prof. Ronen Ezra Basri
Briefcase 1447

A cheap and effective solution for protecting RFID tags from power attacks.

RFID tags are secure tags present in many applications (e.g. secure passports). They are poised to become the most far-reaching wireless technology since the cell phone, with worldwide revenues expected to reach $2.8 billion in 2009. RFID tags were believed to be immune to power analysis attacks since they have no direct connection to an external power supply. However, recent research has shown that they are vulnerable to such attacks, since it is possible to measure their power consumption without actually needing either tag or reader to be physically touched by the attacker. Furthermore, this attack may be carried out even if no data is being transmitted between the tag and the attacker, making the attack very hard to detect. The current invention overcomes these problems by a slight modification of the tag's electronic system, so that it will not be vulnerable to power analysis.

Applications


  • Improved security of RFID tags.

Advantages


  • Simple and cost-effective
  • The design involves changes only to the RF front-end of the tag, making it quick to roll out


Technology's Essence


An RFID system consists of a high-powered reader communicating with a tag over a wireless medium. The reader generates a powerful electromagnetic field around itself and the tag responds to this field. In passive systems, placing a tag inside the reader's field also provides it with the power it needs to operate. According to the inventive concept, the power consumption of the computational element is detached from the power supply of the tag. Thus, the present invention can almost entirely eliminate the leaked power-consumption information.

  • Prof. Adi Shamir
Briefcase 1522

A method for enhancing the spatial and/or temporal resolution (where applicable) of an input signal such as an image or video.

 

Many imaging devices produce signals of unsatisfactory resolution (e.g. a photo from a cell-phone camera may have low spatial resolution, and a video from a web camera may have both low spatial and low temporal resolution). This method applies digital processing to reconstruct more satisfactory high-resolution signals.

 

Previous methods for Super-Resolution (SR) require multiple images of the same scene, or else an external database of examples. This method provides the ability to perform SR from a single image (or a single visual source). The algorithm exploits the inherent local data redundancy within visual signals (redundancy both within the same scale, and across different scales).

 

Examples of the method's capabilities can be found here: http://www.wisdom.weizmann.ac.il/~vision/SingleImageSR.html

 

Applications


  • Enhancing the spatial resolution of images

  • Enhancing the spatial and/or temporal resolution of video sequences

  • Enhancing the spatial and/or temporal resolution (where applicable) of other signals (e.g., MRI, fMRI, ultrasound, possibly also audio, etc.)

 


Advantages


  • No need for multiple low resolution sources or the use of an external database of examples.

  • Superior results are produced due to exploitation of inherent information in the source signal.


Technology's Essence


The framework combines the power of classical multi image super resolution and example based super resolution. This combined framework can be applied to obtain super resolution from as little as a single low-resolution signal, without any additional external information. The approach is based on an observation that patches in a single natural signal tend to redundantly recur many times inside the signal, both within the same scale, as well as across different scales.

Recurrence of patches within the same scale (at subpixel misalignments) forms the basis for applying the 'classical super resolution' constraints to information from a single signal. Recurrence of patches across different (coarser) scales implicitly provides examples of low-resolution / high-resolution pairs of patches, thus giving rise to 'example-based super-resolution' from a single signal (but without any external database or any prior examples).
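
A heavily simplified Python sketch of the cross-scale, example-based half of the idea (2x upscaling by nearest-neighbour patch lookup against low-res/high-res pairs taken from the image itself); the classical sub-pixel constraints, patch weighting and back-projection of the full framework are omitted.

```python
import numpy as np

def single_image_sr(img, ps=4):
    """Toy 2x super-resolution from a single grayscale image using cross-scale
    patch recurrence: LR/HR example pairs come from a 2x-downscaled copy of the
    image and the image itself."""
    H, W = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:H, :W].astype(float)
    small = img.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))   # 2x downscale by block averaging

    lr_ex, hr_ex = [], []                                          # cross-scale example pairs
    for y in range(0, H // 2 - ps + 1, 2):
        for x in range(0, W // 2 - ps + 1, 2):
            lr_ex.append(small[y:y + ps, x:x + ps].ravel())
            hr_ex.append(img[2 * y:2 * (y + ps), 2 * x:2 * (x + ps)])
    lr_ex = np.array(lr_ex)

    out = np.kron(img, np.ones((2, 2)))                            # naive 2x upscaling
    for y in range(0, H - ps + 1, ps):
        for x in range(0, W - ps + 1, ps):
            q = img[y:y + ps, x:x + ps].ravel()
            k = int(np.argmin(((lr_ex - q) ** 2).sum(axis=1)))     # nearest low-res example
            out[2 * y:2 * (y + ps), 2 * x:2 * (x + ps)] = hr_ex[k] # paste its high-res parent
    return out
```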

  • Prof. Michal Irani
