The proposed black-hole finder mission EXIST will consist of multiple wide-field hard X-ray coded-aperture telescopes. The ambitious science goals set for the mission require innovations in telescope design. In particular, wide energy-band coverage and fine angular resolution require relatively thick coded masks and detectors that are thick compared to their pixel size, which may introduce mask self-collimation and depth-induced image blurring with conventional design approaches. Previously we proposed relatively simple solutions to these potential problems: a radial-hole mask for mask self-collimation and a cathode depth-sensing detector for image blurring. We have now performed laboratory experiments to explore the potential of these two techniques. The experimental results show that the radial-hole mask greatly alleviates mask self-collimation, and that a ~1 mm resolution depth-sensitive detector scheme can be achieved relatively easily at the large scale required for EXIST.
{\it ProtoEXIST1} is a pathfinder for the {\it EXIST}-HET, a coded-aperture hard X-ray telescope with a 4.5 m$^2$ CZT detector plane and a 90$\times$70 degree field of view, to be flown as the primary instrument on the {\it EXIST} mission, where it is intended to monitor the full sky every 3 h in an effort to locate GRBs and other high-energy transients. {\it ProtoEXIST1}, which recently completed its maiden flight out of Ft. Sumner, NM on 9 October 2009, consists of a 256 cm$^2$ tiled CZT detector plane containing 4096 pixels, composed of an 8$\times$8 array of individual 1.95 cm $\times$ 1.95 cm $\times$ 0.5 cm CZT detector modules, each with an 8$\times$8 pixelated anode, configured as a coded-aperture telescope with a fully coded $10^\circ\times10^\circ$ field of view employing passive side shielding and an active CsI anti-coincidence rear shield. During its 6 hour flight, on-board calibration of the detector plane was carried out using a single tagged 198.8 nCi Am-241 source, along with simultaneous measurement of the background spectrum and an observation of Cygnus X-1. Here we recount the events of the flight and report on the detector performance in a near-space environment. We also briefly discuss {\it ProtoEXIST2}, the next stage of detector development, which employs the {\it NuSTAR} ASIC, enabling finer (32$\times$32) anode pixelation. When completed, {\it ProtoEXIST2} will consist of a 256 cm$^2$ tiled array and will be flown simultaneously with the {\it ProtoEXIST1} telescope.
The capture of scintillation light emitted by liquid argon and xenon under molecular excitation by charged particles remains a challenging task. Here we present a first attempt to design a device able to collect sufficient light to reconstruct the path of ionizing particles. This preliminary study is based on the use of coded masks to encode the light signal, combined with single-photon detectors. The proposed system is able to detect tracks over focal distances of about tens of centimeters. Numerical simulations show that it is possible to successfully decode and recognize even complex signals with a relatively limited number of acquisition channels. This innovative technique can be very fruitful for a new generation of detectors devoted to neutrino physics and dark-matter searches. Indeed, the introduction of coded masks combined with SiPM detectors is proposed for a liquid-argon target in the Near Detector of the DUNE experiment.
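The encode-then-decode principle behind coded-mask light collection can be sketched numerically. The toy below uses a random binary mask (a real design would use a URA/MURA pattern for its ideal autocorrelation); each emitting point casts a shifted mask shadow on the detector, and cross-correlating the detector image with the mask pattern recovers the track. All sizes and patterns here are illustrative assumptions, not the parameters of the proposed device.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)

# Hypothetical random coded mask (1 = open, 0 = opaque).
mask = rng.integers(0, 2, size=(31, 31)).astype(float)

# Toy "track" of scintillation light: a short diagonal segment.
scene = np.zeros((31, 31))
for i in range(10, 20):
    scene[i, i] = 1.0

# Each scene element casts a shifted mask shadow on the detector:
# the recorded image is the correlation of the scene with the mask.
detector = correlate2d(scene, mask, mode="same")

# Decoding: cross-correlate the detector image with the mask again;
# the autocorrelation peak of the mask makes the track stand out.
decoded = correlate2d(detector, mask, mode="same")
peak = np.unravel_index(np.argmax(decoded), decoded.shape)
print(peak)  # expected to land on or near the diagonal track
```

The decoded image equals the scene blurred by the mask autocorrelation, which is why mask patterns with a sharp, delta-like autocorrelation are preferred in practice.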
High-resolution images are widely used in our daily life, whereas high-speed video capture remains challenging due to the low frame rate of cameras working in high-resolution mode. Digging deeper, the main bottleneck lies in the low throughput of existing imaging systems. Towards this end, snapshot compressive imaging (SCI) was proposed as a promising solution to improve the throughput of imaging systems via compressive sampling and computational reconstruction. During acquisition, multiple high-speed images are encoded and collapsed into a single measurement. Afterwards, algorithms are employed to retrieve the video frames from the coded snapshot. Recently developed Plug-and-Play (PnP) algorithms make SCI reconstruction possible in large-scale problems. However, the lack of high-resolution encoding systems still precludes SCI's wide application. In this paper, we build a novel hybrid coded aperture snapshot compressive imaging (HCA-SCI) system by incorporating a dynamic liquid crystal on silicon and a high-resolution lithography mask. We further implement a PnP reconstruction algorithm with cascaded denoisers for high-quality reconstruction. Based on the proposed HCA-SCI system and algorithm, we achieve a 10-megapixel SCI system to capture high-speed scenes, leading to a high throughput of 4.6G voxels per second. Both simulation and real-data experiments verify the feasibility and performance of our proposed HCA-SCI scheme.
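The SCI forward model described above can be sketched in a few lines: each high-speed frame is modulated by its own binary mask and the modulated frames are summed on the sensor into one snapshot. The per-pixel least-squares initialization shown at the end is a common starting point for PnP-style solvers; the random masks and toy sizes are illustrative assumptions (in HCA-SCI the codes come from the LCoS plus lithography-mask combination).

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 8, 64, 64          # 8 high-speed frames, 64x64 (toy scale)

# Toy video: a bright block moving one pixel per frame.
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 20:28, 10 + t:18 + t] = 1.0

# Random binary coding masks, one per frame (illustrative).
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)

# Snapshot measurement: frames are modulated and summed on the sensor.
y = (masks * video).sum(axis=0)

# Simple per-pixel least-squares initialization, x0 = C y / (C^T C),
# often used before plugging in a denoiser in PnP reconstruction.
x0 = masks * y / ((masks ** 2).sum(axis=0) + 1e-6)
print(y.shape, x0.shape)  # (64, 64) (8, 64, 64)
```

From this initialization, a PnP solver alternates a data-fidelity update against the measurement `y` with a call to an off-the-shelf denoiser.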
FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike in a traditional, lens-based camera, where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which enables a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.
Activity detection from first-person videos (FPV) captured using a wearable camera is an active research field with potential applications in many sectors, including healthcare, law enforcement, and rehabilitation. State-of-the-art methods use optical flow-based hybrid techniques that rely on features derived from the motion of objects across consecutive frames. In this work, we developed a two-stream network, \emph{SegCodeNet}, that uses a network branch containing video streams with color-coded semantic segmentation masks of relevant objects in addition to the original RGB video stream. We also include a stream-wise attention gating that prioritizes between the two streams and a frame-wise attention module that prioritizes the video frames that contain relevant features. Experiments are conducted on an FPV dataset containing $18$ activity classes in office environments. In comparison to a single-stream network, the proposed two-stream method achieves an absolute improvement of $14.366\%$ and $10.324\%$ for averaged F1 score and accuracy, respectively, when average results are compared for three different frame sizes $224\times224$, $112\times112$, and $64\times64$. The proposed method provides significant performance gains for lower-resolution images, with absolute improvements of $17\%$ and $26\%$ in F1 score for input dimensions of $112\times112$ and $64\times64$, respectively. The best performance is achieved for a frame size of $224\times224$, yielding an F1 score and accuracy of $90.176\%$ and $90.799\%$, which outperforms the state-of-the-art Inflated 3D ConvNet (I3D) \cite{carreira2017quo} method by an absolute margin of $4.529\%$ and $2.419\%$, respectively.
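The two attention mechanisms described above can be illustrated with a small numerical sketch: a stream-wise gate softmax-weights the pooled features of the RGB and segmentation streams, and a frame-wise module softmax-weights frames over time. The projections `w_gate` and `w_frame` stand in for learned layers, and all shapes are illustrative assumptions, not the SegCodeNet architecture itself.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(3)
B, T, D = 2, 16, 128          # batch, frames, feature dim (toy sizes)

# Features from the two streams (RGB and segmentation-mask branches).
f_rgb = rng.standard_normal((B, T, D))
f_seg = rng.standard_normal((B, T, D))

# Stream-wise gate: score each stream from its temporally pooled
# features, then softmax over the two scores to weight the streams.
w_gate = rng.standard_normal((D, 1))          # stand-in for a learned layer
scores = np.concatenate([f_rgb.mean(axis=1) @ w_gate,
                         f_seg.mean(axis=1) @ w_gate], axis=1)  # (B, 2)
alpha = softmax(scores, axis=1)
fused = alpha[:, 0, None, None] * f_rgb + alpha[:, 1, None, None] * f_seg

# Frame-wise attention: score each frame, softmax over time, weighted sum.
w_frame = rng.standard_normal((D, 1))
beta = softmax((fused @ w_frame).squeeze(-1), axis=1)   # (B, T)
clip_feat = (beta[..., None] * fused).sum(axis=1)       # (B, D)
print(clip_feat.shape)  # (2, 128)
```

In a trained network the two softmax distributions let gradients push weight toward whichever stream, and whichever frames, carry the discriminative evidence.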