
Random Variables and Probability Distributions

Publication date: 2019
Research language: Arabic
Created by Shamra Editor





No English abstract is available for this research.

References used: none

Related research

We present in this paper the neutrosophic random variables, a generalization of classical random variables obtained by applying neutrosophic logic (a new non-classical logic founded by the American philosopher and mathematician Florentin Smarandache, who introduced it as a generalization of fuzzy logic, in particular intuitionistic fuzzy logic) to classical random variables.
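Since this excerpt gives no formal definitions, the following is only a minimal Python sketch of the idea, assuming a neutrosophic quantity is a classical outcome annotated with degrees of truth, indeterminacy, and falsity (T, I, F); the class and the die example are hypothetical, not the paper's construction.

```python
from dataclasses import dataclass
import random

@dataclass
class NeutrosophicValue:
    """A classical outcome with neutrosophic degrees attached.

    T, I, F are the degrees of truth, indeterminacy, and falsity;
    unlike intuitionistic fuzzy logic, T + I + F need not sum to 1.
    """
    value: float
    T: float
    I: float
    F: float

def sample_neutrosophic_die() -> NeutrosophicValue:
    """Hypothetical neutrosophic random variable: a die roll whose
    reading carries some indeterminacy."""
    return NeutrosophicValue(value=random.randint(1, 6), T=0.8, I=0.15, F=0.1)

print(sample_neutrosophic_die())
```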
The purpose of this research is to build and study a stochastic mathematical model based on a renewable energy source (wind). Finding optimal values for the variables of a mathematical model subject to stochastic conditions is a random mathematical problem that, in the general case, requires special stochastic methods to solve.
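The abstract does not spell out the model itself, so the sketch below only illustrates the stochastic ingredient: a Monte Carlo estimate of expected turbine output under Weibull-distributed wind speeds. The power-curve shape and all numbers are assumptions for illustration, not the paper's model.

```python
import random

# Hypothetical turbine parameters (illustrative values only): m/s, kW.
CUT_IN, RATED_SPEED, CUT_OUT, RATED_POWER = 3.0, 12.0, 25.0, 2000.0

def power_output(v: float) -> float:
    """Piecewise power curve: zero outside [cut-in, cut-out], a cubic
    ramp up to rated speed, then constant rated power."""
    if v < CUT_IN or v > CUT_OUT:
        return 0.0
    if v >= RATED_SPEED:
        return RATED_POWER
    return RATED_POWER * ((v - CUT_IN) / (RATED_SPEED - CUT_IN)) ** 3

# Monte Carlo: wind speed modelled as Weibull (scale 8 m/s, shape 2).
random.seed(0)
samples = [power_output(random.weibullvariate(8.0, 2.0)) for _ in range(100_000)]
print(f"estimated expected output: {sum(samples) / len(samples):.0f} kW")
```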
Emotion classification is the task of automatically associating a text with a human emotion. State-of-the-art models are usually learned from annotated corpora or rely on hand-crafted affective lexicons. We present an emotion classification model that does not require a large annotated corpus to be competitive. We experiment with pretrained language models in both zero-shot and few-shot configurations. We build several such models and treat them as biased, noisy annotators whose individual performance is poor. We aggregate the predictions of these models using a Bayesian method originally developed for modelling crowdsourced annotations. Next, we show that the resulting system performs better than the strongest individual model. Finally, we show that when trained on few labelled data, our systems outperform fully supervised models.
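This excerpt does not name the Bayesian aggregation method; a classical choice for crowdsourced annotations is Dawid-Skene style estimation. The sketch below is a much simpler stand-in, a reliability-weighted vote over hypothetical annotator outputs, meant only to show the aggregation step.

```python
from collections import defaultdict

# Hypothetical predictions from three noisy zero-/few-shot annotators
# over four texts; labels and reliabilities are illustrative only.
predictions = {
    "model_a": ["joy", "anger", "joy", "sadness"],
    "model_b": ["joy", "fear", "joy", "sadness"],
    "model_c": ["surprise", "anger", "joy", "fear"],
}
reliability = {"model_a": 0.7, "model_b": 0.6, "model_c": 0.4}

def aggregate(preds: dict, weights: dict) -> list:
    """Pick, per item, the label with the highest total annotator weight."""
    n_items = len(next(iter(preds.values())))
    result = []
    for i in range(n_items):
        votes = defaultdict(float)
        for annotator, labels in preds.items():
            votes[labels[i]] += weights[annotator]
        result.append(max(votes, key=votes.get))
    return result

print(aggregate(predictions, reliability))
# ['joy', 'anger', 'joy', 'sadness']
```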
While solving math word problems automatically has received considerable attention in the NLP community, few works have addressed probability word problems specifically. In this paper, we employ and analyse various neural models for answering such word problems. In a two-step approach, the problem text is first mapped to a formal representation in a declarative language using a sequence-to-sequence model, and the resulting representation is then executed by a probabilistic programming system to produce the answer. Our best-performing model incorporates general-domain contextualised word representations that were fine-tuned using transfer learning on another in-domain dataset. We also apply end-to-end models to this task, which brings out the importance of the two-step approach in obtaining correct solutions to probability problems.
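Neither the declarative language nor the probabilistic programming system is named in this excerpt, so the sketch below is a hypothetical miniature of the second step only: executing a formal representation (here a plain dict standing in for what a sequence-to-sequence model might emit) with exact arithmetic.

```python
from fractions import Fraction

# Step 1 (not shown): a seq2seq model maps the problem text
#   "An urn holds 3 red and 2 blue balls; 2 are drawn without
#    replacement. What is the probability both are red?"
# to a formal representation. The dict below is a hypothetical stand-in.
program = {"urn": {"red": 3, "blue": 2}, "draws": 2, "event": ("all", "red")}

def execute(prog: dict) -> Fraction:
    """Step 2: evaluate the representation exactly.

    Only the ('all', colour) event type is implemented in this sketch."""
    counts = dict(prog["urn"])
    total = sum(counts.values())
    _, colour = prog["event"]
    p = Fraction(1)
    for _ in range(prog["draws"]):
        p *= Fraction(counts[colour], total)
        counts[colour] -= 1
        total -= 1
    return p

print(execute(program))  # 3/5 * 2/4 = 3/10
```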
In this research we establish a law of large numbers for random convex-concave closed functions and generalize some results on lower semicontinuous functions to analogous results for convex-concave functions, using the parent convex functions and Mosco epi/hypo-convergence.
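For orientation, the classical epigraphical strong law of large numbers that such results extend can be stated as below; the paper's contribution replaces epi-convergence by Mosco epi/hypo-convergence for convex-concave functions. The notation here is the standard one, not necessarily the paper's.

```latex
% Classical epigraphical SLLN (Attouch; Artstein--Wets): for i.i.d.
% random lower semicontinuous functions f_1, f_2, \ldots satisfying
% suitable integrability conditions, the empirical averages
% epi-converge almost surely to the expectation functional:
\[
  \frac{1}{n}\sum_{i=1}^{n} f_i \;\xrightarrow{\;\mathrm{epi}\;}\; \mathbb{E}[f_1]
  \quad \text{a.s., as } n \to \infty .
\]
```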
