Multisensory object-centric perception, reasoning, and interaction have been a key research topic in recent years. However, progress in these directions is limited by the small set of objects available: synthetic objects are not realistic enough and are mostly centered around geometry, while real object datasets such as YCB are often practically challenging and unstable to acquire due to international shipping, inventory, and financial cost. We present ObjectFolder, a dataset of 100 virtualized objects that addresses both challenges with two key innovations. First, ObjectFolder encodes the visual, auditory, and tactile sensory data for all objects, enabling a number of multisensory object recognition tasks, beyond existing datasets that focus purely on object geometry. Second, ObjectFolder employs a uniform, object-centric, and implicit representation for each object's visual textures, acoustic simulations, and tactile readings, making the dataset flexible to use and easy to share. We demonstrate the usefulness of our dataset as a testbed for multisensory perception and control by evaluating it on a variety of benchmark tasks, including instance recognition, cross-sensory retrieval, 3D reconstruction, and robotic grasping.
We present the design and realization of an ultra-broadband optical spectrometer capable of measuring the spectral intensity of multi-octave-spanning light sources on a single-pulse basis with a dynamic range of up to 8 orders of magnitude. The instrument is optimized for characterizing the temporal structure of femtosecond-long electron bunches by analyzing the emitted coherent transition radiation (CTR) spectra. The spectrometer operates within the spectral range of 250 nm to 11.35 $\mu$m, corresponding to 5.5 optical octaves. This is achieved by dividing the signal beam into three spectral groups, each analyzed by a dedicated spectrometer and detector unit. The complete instrument was characterized with regard to wavelength, relative spectral sensitivity, and absolute photometric sensitivity, always accounting for the light polarization and comparing different calibration methods. Finally, the capability of the spectrometer is demonstrated with a CTR measurement of a laser-wakefield-accelerated electron bunch, enabling the determination of temporal pulse structures at unprecedented resolution.
Temporal networks serve as abstractions of many real-world dynamic systems. These networks typically evolve according to certain laws, such as the law of triadic closure, which is universal in social networks. Inductive representation learning of temporal networks should be able to capture such laws and further be applied to systems that follow the same laws but have not been seen during the training stage. Previous works in this area depend on either network node identities or rich edge attributes and typically fail to extract these laws. Here, we propose Causal Anonymous Walks (CAWs) to inductively represent a temporal network. CAWs are extracted by temporal random walks and work as automatic retrieval of temporal network motifs to represent network dynamics, while avoiding the time-consuming selection and counting of those motifs. CAWs adopt a novel anonymization strategy that replaces node identities with the hitting counts of the nodes based on a set of sampled walks, which keeps the method inductive while simultaneously establishing the correlation between motifs. We further propose a neural-network model, CAW-N, to encode CAWs, and pair it with a CAW sampling strategy with constant memory and time cost to support online training and inference. CAW-N is evaluated on link prediction over 6 real temporal networks and uniformly outperforms previous SOTA methods by an average 10% AUC gain in the inductive setting. CAW-N also outperforms previous methods on 4 of the 6 networks in the transductive setting.
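The anonymization step described above can be illustrated with a minimal sketch: a node's identity is discarded and replaced by the counts of how often that node appears at each position across a set of sampled walks. This is a hypothetical simplification for illustration (the full CAW method uses walks sampled from both endpoints of the candidate link and respects temporal causality); the function name and data layout here are assumptions, not the authors' code.

```python
def positional_hitting_counts(node, sampled_walks, walk_len):
    """Replace a node identity with its positional hitting counts.

    `sampled_walks` is a list of walks, each a list of node ids of
    length `walk_len`. The node is encoded as a vector counting how
    often it occurs at each walk position, so the encoding depends
    only on the sampled structure, not on the raw node id.
    """
    counts = [0] * walk_len
    for walk in sampled_walks:
        for pos, visited in enumerate(walk):
            if visited == node:
                counts[pos] += 1
    return counts

# Three walks of length 3 rooted at node 0:
walks = [[0, 2, 3], [0, 2, 4], [0, 3, 2]]
positional_hitting_counts(2, walks, 3)  # → [0, 2, 1]
positional_hitting_counts(0, walks, 3)  # → [3, 0, 0]
```

Because two different nodes that play the same structural role in the sampled walks receive the same count vector, the representation transfers to graphs whose node ids were never seen in training, which is what makes the approach inductive.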