
Objective: This research aimed to describe several areas in which AI could play a role in the development of personalized medicine and drug screening, and the transformations it has created in the fields of biology and therapy. It also addressed the limitations faced by the application of artificial intelligence techniques and makes suggestions for further research.

Methods: We conducted a comprehensive review of research papers on the role of AI in personalized medicine and drug screening, and filtered the list of works for those relevant to this review.

Results: Artificial intelligence can play an important role in the development of personalized medicines and drug screening at all clinical phases related to the development and implementation of new customized health products, from finding appropriate medicines to testing their usefulness. In addition, expertise in the use of artificial intelligence techniques can play a special role in this regard.

Discussion: The capacity of AI to enhance decision-making in personalized medicine and drug screening will largely depend on the accuracy of the relevant tests and on the ways in which the data produced is stored, aggregated, accessed, and ultimately integrated.

Conclusion: The review of the relevant literature revealed that AI techniques can enhance decision-making in personalized medicine and drug screening by improving the ways in which the produced data is aggregated, accessed, and ultimately integrated. A major obstacle in this field is that most hospitals and healthcare centers do not employ AI solutions, because healthcare professionals lack the expertise to build successful models with AI techniques and integrate them into clinical workflows.
Compositional, structured models are appealing because they explicitly decompose problems and provide interpretable intermediate outputs that give confidence that the model is not simply latching onto data artifacts. Learning these models is challenging, however, because end-task supervision only provides a weak indirect signal on what values the latent decisions should take. This often results in the model failing to learn to perform the intermediate tasks correctly. In this work, we introduce a way to leverage paired examples that provide stronger cues for learning latent decisions. When two related training examples share internal substructure, we add an additional training objective to encourage consistency between their latent decisions. Such an objective does not require external supervision for the values of the latent output, or even the end task, yet provides an additional training signal beyond that provided by individual training examples themselves. We apply our method to improve compositional question answering using neural module networks on the DROP dataset. We explore three ways to acquire paired questions in DROP: (a) discovering naturally occurring paired examples within the dataset, (b) constructing paired examples using templates, and (c) generating paired examples using a question generation model. We empirically demonstrate that our proposed approach improves both in- and out-of-distribution generalization and leads to correct latent decision predictions.
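As a rough illustration (not code from the paper), a consistency objective between paired examples can be sketched as a symmetric KL divergence between the latent-decision distributions induced by the two examples; the function names and the specific choice of divergence here are assumptions for illustration only:

```python
import math

def softmax(logits):
    """Convert unnormalized scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def paired_consistency_loss(logits_a, logits_b):
    """Symmetric KL divergence between the latent-decision distributions
    of two paired examples; zero exactly when they agree.
    In the paper's setting the 'latent decision' might be, e.g., which
    passage span a question module selects."""
    p, q = softmax(logits_a), softmax(logits_b)
    kl = lambda a, b: sum(ai * (math.log(ai) - math.log(bi))
                          for ai, bi in zip(a, b))
    return 0.5 * (kl(p, q) + kl(q, p))
```

Minimizing this term pushes the two examples toward the same latent decision without any supervision on what that decision should be, which is the key property the abstract describes.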
Mining the causes of political decision-making is an active research area in the field of political science. In the past, most studies have focused on long-term policies that are collected over several decades, and have primarily relied on surveys as the main source of predictors. However, the recent COVID-19 pandemic has given rise to a new political phenomenon, where political decision-making consists of frequent short-term decisions, all on the same controlled topic---the pandemic. In this paper, we focus on the question of how public opinion influences policy decisions, while controlling for confounders such as COVID-19 case increases or unemployment rates. Using a dataset consisting of Twitter data from the 50 US states, we classify the sentiments toward governors of each state, and conduct controlled studies and comparisons. Based on the compiled samples of sentiments, policies, and confounders, we conduct causal inference to discover trends in political decision-making across different states.
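Controlling for observed confounders in a linear setting can be sketched via the Frisch-Waugh partialling-out identity: residualize both treatment and outcome on the confounder, then regress residual on residual. The data and names below are illustrative and are not taken from the study:

```python
def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def residuals(x, y):
    """Residuals of y after an OLS fit on x."""
    b = slope(x, y)
    a = sum(y) / len(y) - b * sum(x) / len(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def adjusted_effect(treatment, outcome, confounder):
    """Effect of, e.g., public sentiment (treatment) on a policy measure
    (outcome), controlling for one observed confounder (e.g. weekly case
    increases) by partialling the confounder out of both variables."""
    return slope(residuals(confounder, treatment),
                 residuals(confounder, outcome))
```

With synthetic data generated as `outcome = 2*treatment + 3*confounder`, the naive slope of outcome on treatment is biased, while `adjusted_effect` recovers the coefficient 2.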
Relevance in summarization is typically defined based on textual information alone, without incorporating insights about a particular decision. As a result, to support risk analysis of pancreatic cancer, summaries of medical notes may include irrelevant information such as a knee injury. We propose a novel problem, decision-focused summarization, where the goal is to summarize relevant information for a decision. We leverage a predictive model that makes the decision based on the full text to provide valuable insights on how a decision can be inferred from text. To build a summary, we then select representative sentences that lead to similar model decisions as using the full text while accounting for textual non-redundancy. To evaluate our method (DecSum), we build a testbed where the task is to summarize the first ten reviews of a restaurant in support of predicting its future rating on Yelp. DecSum substantially outperforms text-only summarization methods and model-based explanation methods in decision faithfulness and representativeness. We further demonstrate that DecSum is the only method that enables humans to outperform random chance in predicting which restaurant will be better rated in the future.
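A minimal sketch of the decision-faithfulness part of such a selection procedure (omitting the non-redundancy term the abstract mentions; the names are illustrative, and this is not the authors' implementation):

```python
def decision_focused_summary(sentences, predict, k=3):
    """Greedily pick k sentences whose joint prediction stays close to the
    full-text prediction (decision faithfulness).
    `predict` maps a list of sentences to a scalar decision, e.g. a
    predicted future rating."""
    target = predict(sentences)           # decision from the full text
    chosen, remaining = [], list(sentences)
    for _ in range(min(k, len(sentences))):
        # pick the sentence that keeps the summary's decision closest
        # to the full-text decision
        best = min(remaining,
                   key=lambda s: abs(predict(chosen + [s]) - target))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

With a toy predictor that averages per-sentence scores {a: 1, b: 5, c: 9, d: 5}, the full-text decision is 5, and the greedy loop selects the sentences whose average stays at 5 rather than the most extreme ones.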
We offer an approach to explain Decision Tree (DT) predictions by addressing potential conflicts between aspects of these predictions and plausible expectations licensed by background information. We define four types of conflicts, operationalize their identification, and specify explanatory schemas that address them. Our human evaluation focused on the effect of explanations on users' understanding of a DT's reasoning and their willingness to act on its predictions. The results show that (1) explanations that address potential conflicts are considered at least as good as baseline explanations that just follow a DT path; and (2) the conflict-based explanations are deemed especially valuable when users' expectations disagree with the DT's predictions.
Post-hoc explanation methods are an important class of approaches that help understand the rationale underlying a trained model's decision. But how useful are they to an end-user trying to accomplish a given task? In this vision paper, we argue the need for a benchmark to facilitate evaluations of the utility of post-hoc explanation methods. As a first step to this end, we enumerate desirable properties that such a benchmark should possess for the task of debugging text classifiers. Additionally, we highlight that such a benchmark facilitates not only assessing the effectiveness of explanations but also their efficiency.
We study controllable text summarization, which allows users to gain control over a particular attribute (e.g., length limit) of the generated summaries. In this work, we propose a novel training framework based on Constrained Markov Decision Process (CMDP), which conveniently includes a reward function along with a set of constraints, to facilitate better summarization control. The reward function encourages the generation to resemble the human-written reference, while the constraints are used to explicitly prevent the generated summaries from violating user-imposed requirements. Our framework can be applied to control important attributes of summarization, including length, covered entities, and abstractiveness, as we devise specific constraints for each of these aspects. Extensive experiments on popular benchmarks show that our CMDP framework helps generate informative summaries while complying with a given attribute's requirement.
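One common way to scalarize a constrained MDP objective is a Lagrangian relaxation: maximize task reward minus multiplier-weighted constraint violations. The sketch below assumes a length constraint and uses illustrative names; it is not taken from the paper:

```python
def lagrangian_reward(task_reward, constraint_costs, lambdas):
    """Scalarized CMDP objective: task reward minus weighted constraint
    violations. The non-negative multipliers (lambdas) are typically
    updated by gradient ascent on the observed violations."""
    return task_reward - sum(l * c for l, c in zip(lambdas, constraint_costs))

def length_cost(summary_len, max_len):
    """Cost for a user-imposed length limit: positive only when the
    generated summary exceeds the limit."""
    return max(0, summary_len - max_len)
```

For example, a summary of 120 tokens under a 100-token limit with multiplier 0.1 turns a task reward of 1.0 into 1.0 - 0.1 * 20 = -1.0, penalizing the violation; a compliant summary is not penalized at all.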
The purpose of this research is to study the impact of the financing decision (FD) on profitability in the textile industry companies on the Syrian coast during the period (2000-2016); these are three companies not listed on the Damascus Stock Exchange. The financing decision was measured by the ratio of total debt to total assets (TD); profitability was measured by return on assets (ROA), return on equity (ROE), and return on capital (ROC). A series of annual financial statements of the three companies was used for the period under review (panel data). In order to estimate the models of the study, a unit root test was applied to check the stationarity of the studied variables. After confirming their stationarity, the regression models were estimated using the ordinary least squares method. The study reached several results, the most important of which is that debt has a negative impact on profitability across the different ratios used to measure both debt and profitability.
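The reported regressions amount to ordinary least squares of a profitability measure on the debt ratio; a minimal sketch with toy, illustrative numbers (not the study's data):

```python
def ols_slope(x, y):
    """Ordinary least squares slope of y on x (with an intercept),
    mirroring a regression of ROA on the total-debt/total-assets
    ratio (TD). A negative slope indicates debt hurts profitability."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Toy illustration: higher debt ratios paired with lower ROA
td = [0.30, 0.45, 0.50, 0.62, 0.70]    # total debt / total assets
roa = [0.12, 0.09, 0.08, 0.05, 0.03]   # return on assets
```

On this toy sample the estimated slope is negative, which is the qualitative finding the study reports across its debt and profitability ratios.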
This study aims to examine the extent to which the two dimensions of the employee empowerment strategy (employee participation in decision-making, and the justice and equity of top management) are applied in the public industrial companies in Latakia. The study was applied in the General Organization of Tobacco, the Textile Latakia Company, and the General Company of Cotton Threads of Latakia. To achieve the purpose of the study, a questionnaire was prepared and distributed to a research sample of (310) workers in these companies; (265) questionnaires were returned and were valid for analysis with the statistical program SPSS in order to meet the study's goals. In addition to the questionnaire, several interviews were conducted with managers and employees to gain a precise awareness of the work environment. The main result of this study was that the public industrial companies lack the minimum elements required to implement the strategy, and that the workers of these companies are neither content nor satisfied with the work environment, which does not encourage or support workers to take part in decision-making related to their jobs; there is also injustice in the work systems followed by top management.
This research was conducted to determine the impact of bank-related factors (the bank's liquidity, the strategy adopted by the bank, the bank's share of the credit market, and the bank's material and human resources) on the decision to grant small bank credit in a sample of banks operating on the Syrian coast. The researcher distributed (115) questionnaires to a sample of employees in the credit departments of the banks under study; the number of questionnaires recovered and valid for analysis was (90). The researcher analyzed the data using the statistical analysis program SPSS 20. At the end of this research, the researcher reached a number of conclusions, the most important of which is that all bank-related factors have a significant effect on the decision to grant microcredit, in the following order of importance and influence: the strategy adopted by the bank, the bank's liquidity, the bank's material resources, the bank's human resources, and the bank's share of the credit market. In addition, the researcher presented the following recommendations: encouraging senior management in the banks under study to formulate a fixed strategy and specific procedures that help the decision-making process for granting microcredit succeed; providing automated, sophisticated communication networks that allow information to flow easily between all departments; and providing the material supplies necessary to complete the work and to develop the human resources working in the banks under study.