
Artificial Intelligence Enabled Reagent-free Imaging Hematology Analyzer

Published by Xin Shu
Publication date: 2020
Research field: Physics
Paper language: English





The leukocyte differential test is a widely performed clinical procedure for screening infectious diseases. Existing hematology analyzers require labor-intensive work and a panel of expensive reagents. Here we report an artificial-intelligence-enabled reagent-free imaging hematology analyzer (AIRFIHA), a modality that can accurately classify subpopulations of leukocytes with minimal sample preparation. AIRFIHA is realized by training a two-step residual neural network on label-free images of separated leukocytes acquired with a custom-built quantitative phase microscope. We validated the performance of AIRFIHA on a randomly selected test set and cross-validated it across all blood donors. AIRFIHA outperforms current methods in classification accuracy, especially for B and T lymphocytes, while preserving the natural state of the cells. It also shows promising potential for differentiating CD4 and CD8 cells. Owing to its easy operation, low cost, and strong ability to discern complex leukocyte subpopulations, we envision that AIRFIHA is clinically translatable and can also be deployed in resource-limited settings, e.g., during pandemics for the rapid screening of infectious diseases.
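To illustrate the two-step classification idea described in the abstract, the sketch below chains two ResNet-18 backbones: the first stage assigns a coarse leukocyte class, and the second refines lymphocytes into B and T cells. This is a minimal PyTorch sketch under assumed class groupings, input size, and hyperparameters; it is not the authors' published architecture, and the single-channel input merely stands in for a quantitative phase image.

```python
# Hypothetical sketch of a two-stage residual-network classifier for
# quantitative-phase leukocyte images. The stage split (coarse leukocyte
# type, then B vs T lymphocyte) and all hyperparameters are illustrative
# assumptions, not the architecture published in the paper.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def make_resnet(num_classes: int) -> nn.Module:
    """ResNet-18 adapted to single-channel phase images."""
    model = resnet18(weights=None)
    # Phase images have one channel rather than RGB.
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


class TwoStageLeukocyteClassifier(nn.Module):
    """Stage 1 assigns a coarse leukocyte class; stage 2 refines lymphocytes into B/T."""

    COARSE = ["granulocyte", "monocyte", "lymphocyte"]
    LYMPH = ["B cell", "T cell"]

    def __init__(self):
        super().__init__()
        self.stage1 = make_resnet(num_classes=len(self.COARSE))
        self.stage2 = make_resnet(num_classes=len(self.LYMPH))

    @torch.no_grad()
    def predict(self, phase_image: torch.Tensor) -> str:
        """phase_image: (1, 1, H, W) tensor of optical path-length values."""
        self.eval()
        coarse = self.COARSE[self.stage1(phase_image).argmax(dim=1).item()]
        if coarse != "lymphocyte":
            return coarse
        return self.LYMPH[self.stage2(phase_image).argmax(dim=1).item()]


if __name__ == "__main__":
    clf = TwoStageLeukocyteClassifier()
    dummy = torch.randn(1, 1, 224, 224)  # placeholder phase image
    print(clf.predict(dummy))
```

One motivation for such a staged design is that the hardest distinction reported in the abstract, B versus T lymphocytes, gets a dedicated model rather than competing for capacity with the easier coarse classes.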


Read also

With the recent advances of the Internet of Things, the increasing accessibility of ubiquitous computing resources and mobile devices, the prevalence of rich media content, and the ensuing social, economic, and cultural changes, computing technology and applications have evolved quickly over the past decade. They now go beyond personal computing, facilitating collaboration and social interactions in general and causing a rapid proliferation of social relationships among IoT entities. The increasing number of these relationships and their heterogeneous social features have led to computing and communication bottlenecks that prevent the IoT network from taking advantage of these relationships to improve the offered services and customize the delivered content, a problem known as relationship explosion. On the other hand, rapid advances in artificial intelligence applications in social computing have led to the emergence of a promising research field known as Artificial Social Intelligence (ASI), which has the potential to tackle the social relationship explosion problem. This paper discusses the role of IoT in social relationship detection and management, examines the problem of social relationship explosion in IoT, and reviews the solutions proposed using ASI, including social-oriented machine-learning and deep-learning techniques.
Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 to avoid this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment, it will also thrive, benefiting from the inevitable shocks and volatility of war.
Artificial intelligence (AI)-based methods are showing promise in multiple medical-imaging applications. Thus, there is substantial interest in the clinical translation of these methods, requiring, in turn, that they be evaluated rigorously. In this paper, our goal is to lay out a framework for objective task-based evaluation of AI methods. We also provide a list of tools available in the literature to conduct this evaluation. Further, we outline the important role of physicians in conducting these evaluation studies. The examples in this paper are presented in the context of PET, with a focus on neural-network-based methods. However, the framework is also applicable to evaluating other medical-imaging modalities and other types of AI methods.
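To make the notion of task-based evaluation concrete, the snippet below computes a receiver-operating-characteristic (ROC) curve and its area under the curve for a binary signal-detection task, a standard figure of merit for such studies. The observer scores are synthetic placeholders and scikit-learn is an assumed dependency; this is only a sketch of one ingredient, not the paper's full evaluation framework.

```python
# Hypothetical sketch of one ingredient of task-based evaluation: ROC analysis
# of an observer's test statistics on a binary signal-detection task.
# The scores below are synthetic placeholders, not data from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Simulated observer test statistics for signal-absent and signal-present cases.
signal_absent = rng.normal(loc=0.0, scale=1.0, size=500)
signal_present = rng.normal(loc=1.5, scale=1.0, size=500)

scores = np.concatenate([signal_absent, signal_present])
labels = np.concatenate([np.zeros(500), np.ones(500)])

auc = roc_auc_score(labels, scores)
fpr, tpr, _ = roc_curve(labels, scores)
print(f"Detection-task AUC: {auc:.3f}")
```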
The rise of Artificial Intelligence (AI) will bring with it an ever-increasing willingness to cede decision-making to machines. But rather than just giving machines the power to make decisions that affect us, we need ways to work cooperatively with AI systems. There is a vital need for research in AI and Cooperation that seeks to understand the ways in which systems of AIs and systems of AIs with people can engender cooperative behavior. Trust in AI is also key: trust that is intrinsic and trust that can only be earned over time. Here we use the term AI in its broadest sense, as employed by the recent 20-Year Community Roadmap for AI Research (Gil and Selman, 2019), including but certainly not limited to, recent advances in deep learning. With success, cooperation between humans and AIs can build society just as human-human cooperation has. Whether coming from an intrinsic willingness to be helpful, or driven through self-interest, human societies have grown strong and the human species has found success through cooperation. We cooperate in the small -- as family units, with neighbors, with co-workers, with strangers -- and in the large as a global community that seeks cooperative outcomes around questions of commerce, climate change, and disarmament. Cooperation has evolved in nature also, in cells and among animals. While many cases involving cooperation between humans and AIs will be asymmetric, with the human ultimately in control, AI systems are growing so complex that, even today, it is impossible for the human to fully comprehend their reasoning, recommendations, and actions when functioning simply as passive observers.
Ryosuke Ota, 2021
Positron emission tomography, like many other tomographic imaging modalities, relies on an image reconstruction step to produce cross-sectional images from projection data. Detection and localization of the back-to-back annihilation photons produced by positron-electron annihilation define the trajectories of these photons, which, when combined with tomographic reconstruction algorithms, permits recovery of the distribution of positron-emitting radionuclides. Here we produce cross-sectional images directly from the detected coincident annihilation photons, without using a reconstruction algorithm. Ultra-fast radiation detectors with a resolving time averaging 32 picoseconds measured the difference in arrival time of pairs of annihilation photons, localizing the annihilation site to 4.8 mm. This is sufficient to directly generate an image without reconstruction and without the geometric and sampling constraints that are normally present in tomographic imaging systems.
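The quoted 4.8 mm localization follows directly from the timing resolution: a coincidence resolving time Δt confines the annihilation point along the line of response to roughly Δx = cΔt/2. The short sketch below reproduces that number as a back-of-the-envelope check; it is not code from the study.

```python
# Back-of-the-envelope check of the quoted localization: a coincidence
# timing resolution dt confines the annihilation point along the line of
# response to dx = c * dt / 2 (the factor of 2 because the arrival-time
# difference corresponds to twice the displacement from the midpoint).
C = 299_792_458.0           # speed of light, m/s
TIMING_RESOLUTION = 32e-12  # coincidence resolving time, s

dx = C * TIMING_RESOLUTION / 2
print(f"Localization along the line of response: {dx * 1e3:.1f} mm")  # ~4.8 mm
```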