
FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which enables a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.
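The benefit of a separable mask can be illustrated with a small numerical sketch. The code below is a minimal illustration, not the authors' implementation: it assumes a measurement model Y = Phi_L X Phi_R^T with calibrated left/right operators Phi_L, Phi_R, and inverts it with a Tikhonov-regularized least-squares step; the matrix sizes, noise level, and regularization weight are arbitrary choices for the example.

```python
# Minimal sketch of separable-model image reconstruction (illustrative only).
# Assumes measurements Y = Phi_L @ X @ Phi_R.T + noise, with Phi_L and Phi_R
# known from calibration; sizes and regularization are arbitrary here.
import numpy as np

rng = np.random.default_rng(0)

n = 64            # scene is n x n
m = 128           # sensor is m x m
X = rng.random((n, n))                 # unknown scene (stand-in)
Phi_L = rng.standard_normal((m, n))    # left separable mask operator
Phi_R = rng.standard_normal((m, n))    # right separable mask operator

Y = Phi_L @ X @ Phi_R.T + 0.01 * rng.standard_normal((m, m))

# Tikhonov-regularized pseudoinverse via the SVDs of the two small operators;
# separability keeps everything at m x n scale instead of (m*m) x (n*n).
def reg_pinv(A, lam=1e-2):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ np.diag(s / (s**2 + lam)) @ U.T

X_hat = reg_pinv(Phi_L) @ Y @ reg_pinv(Phi_R).T
print("relative reconstruction error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

The design choice the sketch highlights is that, with separability, only two m x n calibration matrices need to be stored and inverted, rather than one (m*m) x (n*n) matrix for a general mask.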
Sparse approximation using highly over-complete dictionaries is a state-of-the-art tool for many imaging applications, including denoising, super-resolution, compressive sensing, light-field analysis, and object recognition. Unfortunately, the applicability of such methods is severely hampered by the computational burden of sparse approximation: these algorithms are linear or super-linear in both the data dimensionality and the size of the dictionary. We propose a framework for learning the hierarchical structure of over-complete dictionaries that enables fast computation of sparse representations. Our method builds on tree-based strategies for nearest-neighbor matching and introduces domain-specific enhancements that are highly efficient for the analysis of image patches. Contrary to most popular methods for building spatial data structures, our method relies on shallow, balanced trees with relatively few layers. We present an extensive array of experiments on several applications, such as image denoising/super-resolution and compressive video/light-field sensing, where we achieve 100-1000x speedups in practice (with less than 1 dB loss in accuracy).
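As a rough illustration of the idea (a hedged sketch, not the paper's algorithm), the snippet below builds a shallow, balanced one-level partition of dictionary atoms with k-means and restricts orthogonal matching pursuit to the atoms in the few most correlated clusters; the random dictionary, branching factor, and sparsity level are all arbitrary.

```python
# Illustrative sketch: restrict sparse coding to a shallow partition of the
# dictionary atoms (not the paper's exact algorithm).  A k-means layer groups
# the atoms; at query time only the nearest groups are searched with OMP.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
d, n_atoms, k_clusters, sparsity = 64, 4096, 32, 5

D = rng.standard_normal((d, n_atoms))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms

km = KMeans(n_clusters=k_clusters, n_init=4, random_state=0).fit(D.T)

def fast_sparse_code(y, n_clusters_to_search=2):
    # Pick the clusters whose centroids best correlate with the signal,
    # then run OMP only over the atoms inside those clusters.
    scores = km.cluster_centers_ @ y
    chosen = np.argsort(scores)[-n_clusters_to_search:]
    idx = np.where(np.isin(km.labels_, chosen))[0]
    coeffs = orthogonal_mp(D[:, idx], y, n_nonzero_coefs=sparsity)
    x = np.zeros(n_atoms)
    x[idx] = coeffs
    return x

y = rng.standard_normal(d)
x_hat = fast_sparse_code(y)
print("nonzero coefficients:", np.count_nonzero(x_hat))
```

The cost of each query drops because OMP only ever sees a small fraction of the atoms, which is the same motivation behind the shallow-tree structure described in the abstract.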
A modular method was previously suggested to recover a band-limited signal from the sample-and-hold and linearly interpolated (or, in general, nth-order-hold) versions of its regular samples. In this paper, a novel approach for compensating the distortion of any interpolation based on the modular method is proposed. In this method, the performance of the modular method is improved by adding only a few simply calculated coefficients. This approach yields a drastic improvement in terms of signal-to-noise ratio with fewer modules compared to the classical modular method. Simulation results clearly confirm the improvement of the proposed method as well as its superior robustness against additive noise.
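As a generic illustration of compensating interpolation distortion with a few precomputed coefficients (a sketch of the general idea only, not the paper's modular scheme), the code below models zero-order-hold interpolation as filtering with a known kernel and designs a short FIR correction filter by frequency-domain least squares to flatten the in-band response; the upsampling factor, filter length, and frequency grid are arbitrary.

```python
# Illustrative sketch only (not the paper's modular method): compensate the
# in-band droop of zero-order-hold interpolation with a short FIR filter whose
# coefficients are found by frequency-domain least squares.
import numpy as np

L = 4                                     # hold length of the sample-and-hold
taps = 9                                  # length of the correction filter
w = np.linspace(1e-3, np.pi / L, 256)     # in-band frequencies

# Frequency response of a zero-order hold of length L.
H_zoh = np.array([np.sum(np.exp(-1j * wi * np.arange(L))) for wi in w])

# Solve  H_zoh(w) * C(w) ~= L * exp(-j w d)  in the least-squares sense,
# where C(w) = sum_k c_k exp(-j w k) and d is a bulk delay.
d = (L - 1) / 2 + (taps - 1) / 2
E = np.exp(-1j * np.outer(w, np.arange(taps)))   # basis for C(w)
A = H_zoh[:, None] * E
b = L * np.exp(-1j * w * d)
c, *_ = np.linalg.lstsq(
    np.vstack([A.real, A.imag]), np.concatenate([b.real, b.imag]), rcond=None
)
print("correction coefficients:", np.round(c, 4))
```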
In this paper, we investigate the problem of designing compact-support interpolation kernels for a given class of signals. Using the calculus of variations, we reduce the optimization from an infinite-dimensional nonlinear problem to a finite-dimensional linear one, and then find the optimum compact-support function that best approximates a given filter in the least-squares sense ($\ell_2$ norm). The benefit of compact-support interpolants is the low computational complexity of the interpolation process, while the optimum compact-support interpolant guarantees the highest achievable signal-to-noise ratio (SNR). Our simulation results confirm the superior performance of the proposed splines compared to other conventional compact-support interpolants such as the cubic spline.
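The reduction to a finite-dimensional linear problem can be illustrated in a simplified, hedged form: represent the compact-support kernel as a linear combination of a few shifted cubic B-splines and fit it to the ideal lowpass (sinc) interpolator by least squares. The number of shifts, the fitting interval, and the choice of cubic B-splines below are arbitrary assumptions for the example, not the paper's construction.

```python
# Illustrative sketch (not the paper's exact derivation): write a compact
# support kernel as a linear combination of shifted cubic B-splines and choose
# the weights by least squares so the kernel approximates the ideal lowpass
# (sinc) interpolator on a finite interval.
import numpy as np

def cubic_bspline(x):
    """Centered cubic B-spline, supported on [-2, 2]."""
    ax = np.abs(x)
    out = np.zeros_like(ax)
    m1 = ax < 1
    m2 = (ax >= 1) & (ax < 2)
    out[m1] = 2/3 - ax[m1]**2 + 0.5 * ax[m1]**3
    out[m2] = (2 - ax[m2])**3 / 6
    return out

shifts = np.arange(-2, 3)                      # five shifted splines (arbitrary)
x = np.linspace(-3, 3, 1201)
B = np.column_stack([cubic_bspline(x - s) for s in shifts])

target = np.sinc(x)                            # ideal lowpass interpolation kernel
coeffs, *_ = np.linalg.lstsq(B, target, rcond=None)
kernel = B @ coeffs                            # optimized compact-support kernel
err = np.linalg.norm(kernel - target) / np.linalg.norm(target)
print("relative l2 fit error over [-3, 3]:", round(err, 4))
```

Because the unknowns are only the few combination weights, the search over all compact-support functions collapses into a small linear least-squares solve, which mirrors the finite-dimensional reduction described in the abstract.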
The goal of this paper is to design compact-support basis spline functions that best approximate a given filter (e.g., an ideal lowpass filter). The optimum function is found by minimizing the least-squares criterion (the $\ell_2$ norm of the difference between the desired and the approximated filters) by means of the calculus of variations; more precisely, the introduced splines give optimal filtering properties with respect to their time-support interval. Both mathematical analysis and simulation results confirm the superiority of these splines.
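Schematically, and using notation that is ours rather than the paper's, the design problem can be written as the following constrained least-squares minimization, where $g$ is the sought spline kernel with support $[-T, T]$, $\hat{g}$ its Fourier transform, and $H$ the desired (e.g., ideal lowpass) frequency response:

```latex
% Schematic statement (illustrative notation, not taken from the paper):
% choose the compact-support spline g that best matches the desired filter H.
\min_{\substack{g \\ \operatorname{supp}(g) \subseteq [-T,\,T]}}
\int_{-\infty}^{\infty} \bigl| \hat{g}(\omega) - H(\omega) \bigr|^{2} \, d\omega
```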
It has recently been brought into the spotlight that, through the exploitation of network coding concepts at the physical layer, the interference property of the wireless medium can prove to be a blessing in disguise. Nonetheless, most previous studies on this subject have either held unrealistic assumptions about the network properties, making them essentially theoretical, or have otherwise been limited to fairly simple network topologies. We, on the other hand, have devised a novel scheme, called Real Amplitude Scaling (RAS), that relaxes the aforementioned restrictions and works with a wider range of network topologies and in circumstances that are closer to practice, for instance in the absence of symbol-level synchronization and in the presence of noise, channel distortion, and severe interference from other sources. The simulation results confirm the superior performance of the proposed method at low SNRs, as well as in the high-SNR limit, where the effect of quantization error in digital techniques becomes comparable to that of the channel.
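To make the physical-layer network coding setting concrete, the snippet below is a generic two-source/one-relay superposition illustration, not the RAS scheme itself: two nodes transmit real-amplitude-scaled BPSK symbols that add in the wireless medium, and the relay estimates the superimposed symbol by minimum-distance decoding over the candidate symbol pairs. The amplitudes, constellation, and SNR are arbitrary assumptions.

```python
# Generic physical-layer network coding illustration (not the paper's RAS
# scheme): two sources transmit simultaneously with different real amplitude
# scalings; the relay receives the noisy superposition and decodes the combined
# symbol by minimum distance over all candidate pairs.
import numpy as np

rng = np.random.default_rng(0)
n_symbols = 10_000
symbols = np.array([-1.0, 1.0])          # BPSK per source
a1, a2 = 1.0, 0.5                        # distinct real amplitude scalings
snr_db = 10.0

s1 = rng.choice(symbols, n_symbols)
s2 = rng.choice(symbols, n_symbols)
tx = a1 * s1 + a2 * s2                   # signals add in the wireless medium

noise_std = np.sqrt(np.mean(tx**2) / 10**(snr_db / 10))
y = tx + noise_std * rng.standard_normal(n_symbols)

# Relay decoding: nearest point among all possible superimposed values.
candidates = np.array([a1 * u + a2 * v for u in symbols for v in symbols])
decoded = candidates[np.argmin(np.abs(y[:, None] - candidates[None, :]), axis=1)]
print("superposition symbol error rate:", np.mean(decoded != tx))
```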