
On Clusters that are Separated but Large

Posted by: Sariel Har-Peled
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Given a set $P$ of $n$ points in $\mathbb{R}^d$, consider the problem of computing $k$ subsets of $P$ that form clusters that are well-separated from each other, and each of them is large (cardinality wise). We provide tight upper and lower bounds, and corresponding algorithms, on the quality of separation, and the size of the clusters that can be computed, as a function of $n$, $d$, $k$, $s$, and $\Phi$, where $s$ is the desired separation, and $\Phi$ is the spread of the point set $P$.
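The two quantities the abstract parameterizes over can be made concrete. The sketch below computes the spread $\Phi$ (ratio of the largest to the smallest pairwise distance, a standard definition) and checks $s$-separation, here taken to mean that the gap between any two clusters is at least $s$ times the largest cluster diameter. Both definitions are illustrative assumptions for exposition; the paper's exact conventions may differ.

```python
import math
from itertools import combinations

def spread(P):
    """Spread Phi of a point set: largest pairwise distance divided by
    the smallest (assumed definition; constants may differ in the paper)."""
    ds = [math.dist(p, q) for p, q in combinations(P, 2)]
    return max(ds) / min(ds)

def is_s_separated(clusters, s):
    """True if every pair of clusters is s-separated, here taken to mean:
    min inter-cluster distance >= s * (largest cluster diameter).
    Illustrative notion of separation, not necessarily the paper's."""
    diam = max(
        (math.dist(p, q) for C in clusters for p, q in combinations(C, 2)),
        default=0.0,
    )
    for A, B in combinations(clusters, 2):
        gap = min(math.dist(p, q) for p in A for q in B)
        if gap < s * diam:
            return False
    return True
```

For example, the 1-d point set {0, 1, 10, 11} has spread 11, and splitting it into {0, 1} and {10, 11} gives two clusters of diameter 1 with a gap of 9, so they are 5-separated but not 10-separated under this definition.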


Read also

We present new broad-band optical images of some merging Seyfert galaxies that were earlier considered to be non-interacting objects. On our deep images obtained at the Russian 6-m telescope we have detected elongated tidal envelopes belonging to satellite debris with a surface R-band brightness of about 25-26.5 mag/arcsec^2. These structures are invisible in Sloan Digital Sky Survey (SDSS) pictures because of their photometric limit. We found that 35 per cent of the sample of isolated galaxies has undergone merging during the last 0.5-1 Gyr. Our results suggest that statistical studies based on popular imaging surveys (SDSS or Second Palomar Observatory Sky Survey (POSS-II)) can lead to an underestimation of the fraction of minor mergers among galaxies with active nuclei (AGN). This fact affects the statistics and must be taken into consideration when establishing a connection between minor/major merging or interactions and nuclear activity.
We simulate the formation of a large X-ray cluster using a fully 3D hydrodynamical code coupled to a Particle-Mesh scheme which models the dark matter component. We focus on a possible decoupling between electron and ion temperatures. We then solve the energy transfer equations between electrons, ions and neutrals without assuming thermal equilibrium between the three gases (T_e <> T_i <> T_n). We solve self-consistently the chemical equations for a hydrogen/helium primordial plasma without assuming ionization-recombination equilibrium. We find that the electron temperature differs from the true dynamical temperature by 20% at the virial radius of our simulated cluster. This could lead marginally to an underestimate of the total mass in the outer regions of large X-ray clusters.
Triplet loss is an extremely common approach to distance metric learning. Representations of images from the same class are optimized to be mapped closer together in an embedding space than representations of images from different classes. Much work on triplet losses focuses on selecting the most useful triplets of images to consider, with strategies that select dissimilar examples from the same class or similar examples from different classes. The consensus of previous research is that optimizing with the \textit{hardest} negative examples leads to bad training behavior. That's a problem -- these hardest negatives are literally the cases where the distance metric fails to capture semantic similarity. In this paper, we characterize the space of triplets and derive why hard negatives make triplet loss training fail. We offer a simple fix to the loss function and show that, with this fix, optimizing with hard negative examples becomes feasible. This leads to more generalizable features, and image retrieval results that outperform the state of the art for datasets with high intra-class variance.
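The triplet objective described above has a simple closed form: pull the anchor-positive distance below the anchor-negative distance by at least a margin. A minimal scalar sketch (real training operates on batched learned embeddings; the margin value here is an illustrative choice, and `hardest_negative` is a hypothetical helper name):

```python
import math

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: max(d(a, p) - d(a, n) + margin, 0).
    Zero once the negative is farther than the positive by >= margin."""
    d_ap = math.dist(anchor, positive)
    d_an = math.dist(anchor, negative)
    return max(d_ap - d_an + margin, 0.0)

def hardest_negative(anchor, negatives):
    """Hard-negative mining: the negative closest to the anchor --
    exactly the selection the paper argues destabilizes vanilla
    triplet training without its proposed fix."""
    return min(negatives, key=lambda n: math.dist(anchor, n))
```

With anchor (0, 0), positive (1, 0), and negative (3, 0), the hinge is already satisfied and the loss is 0; moving the negative to (1.1, 0) makes it a hard negative and yields a positive loss of about 0.1.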
For each finite classical group $G$, we classify the subgroups of $G$ which act transitively on a $G$-invariant set of subspaces of the natural module, where the subspaces are either totally isotropic or nondegenerate. Our proof uses the classification of the maximal factorisations of almost simple groups. As a first application of these results we classify all point-transitive subgroups of automorphisms of finite thick generalised quadrangles.
Models for Visual Question Answering (VQA) are notorious for their tendency to rely on dataset biases, as the large and unbalanced diversity of questions and concepts involved tends to prevent models from learning to reason, leading them to perform educated guesses instead. In this paper, we claim that the standard evaluation metric, which consists in measuring the overall in-domain accuracy, is misleading. Since questions and concepts are unbalanced, this tends to favor models which exploit subtle training set statistics. Alternatively, naively introducing artificial distribution shifts between train and test splits is also not completely satisfying. First, the shifts do not reflect real-world tendencies, resulting in unsuitable models; second, since the shifts are handcrafted, trained models are specifically designed for this particular setting, and do not generalize to other configurations. We propose the GQA-OOD benchmark designed to overcome these concerns: we measure and compare accuracy over both rare and frequent question-answer pairs, and argue that the former is better suited to the evaluation of reasoning abilities, which we experimentally validate with models trained to more or less exploit biases. In a large-scale study involving 7 VQA models and 3 bias reduction techniques, we also experimentally demonstrate that these models fail to address questions involving infrequent concepts and provide recommendations for future directions of research.