The study aims to apply the financial ratios of the Altman model to predict the financial failure of the private Syrian commercial banks listed on the Damascus Stock Exchange, and to identify the impact of using the Altman model on the returns of each bank's loan portfolio separately. To achieve this, the necessary data were collected from the published annual reports of 11 banks, obtained from the official website of the Damascus Securities Exchange, covering the years 2011-2019. The independent variable was the Altman model, measured through financial ratios (profitability, liquidity, financial independence, operational efficiency), and the dependent variable was the return on loan portfolios, measured as: loan portfolio rate of return = total interest and commissions from loans / total loans. The results showed no statistically significant effect of the Altman model on the loan portfolio rate of return for the following private commercial banks: Bank Audi - Syria, Bank Al-Sharq, Arab Bank - Syria, Fransabank - Syria, Bank of Syria and Overseas, Byblos Bank - Syria, the International Bank for Trade and Finance, and Bank of Syria and Gulf. By contrast, there is a positive, statistically significant effect of the Altman model on the loan portfolio rate of return for the following private commercial banks: Qatar National Bank - Syria, Bank of Jordan - Syria, and Bemo Saudi Fransi Bank.
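The return measure above is a simple ratio; a minimal sketch, with hypothetical figures rather than data from the study:

```python
def loan_portfolio_return(interest_and_commissions, total_loans):
    """Loan portfolio rate of return =
    total interest and commissions from loans / total loans."""
    return interest_and_commissions / total_loans

# Hypothetical example: 12M earned on a 150M loan book
rate = loan_portfolio_return(12_000_000, 150_000_000)
print(f"{rate:.2%}")  # 8.00%
```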
The present study examines the impact of seven of the most important internal factors on stock prices for all listed banks in the Dubai and Abu Dhabi stock markets. Pooled Least Squares, Fixed Effects (FE), and Random Effects (RE) models were used to analyze data on 23 banks over the period 2014-2017. The aim of the study is to identify the most important internal factors affecting stock prices in the banking sector of the United Arab Emirates, and whether the internal factors determining stock prices in this sector are the same for the Dubai and Abu Dhabi markets. The results give evidence of a positive and significant impact of Earnings Per Share (EPS) and Dividend Per Share (DPS) on the market price of shares: in all markets for the former, and only in the Abu Dhabi market for the latter. By contrast, the study reveals a negative impact of Return on Equity (RoE), Dividend Yield (DY), and the Price-Earnings ratio (P/E) on the market price of shares. More importantly, the study gives evidence of a differentiated impact of the variables representing dividend policy on the market price of shares between the two markets investigated in the United Arab Emirates.
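The pooled least-squares baseline in such panel studies stacks all bank-year observations and fits a single line. A minimal single-regressor sketch with made-up numbers (an actual study would use a statistics package with FE/RE estimators):

```python
def pooled_ols(x, y):
    """Closed-form slope and intercept for y = a + b*x over pooled observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical pooled EPS vs. share-price observations across banks and years
eps = [1.0, 1.5, 2.0, 2.5, 3.0]
price = [10.0, 13.0, 16.0, 19.0, 22.0]
a, b = pooled_ols(eps, price)
print(a, b)  # 4.0 6.0 — price rises ~6 per unit of EPS in this toy data
```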
The European Quality Model that characterizes this edition is based on the following premise: satisfaction of customers and employees, and a positive impact on society, can all be achieved through leadership, strategic policy, correct management of personnel, effective use of available resources, and correct definition of operations, which ultimately result in excellence in results. This approach attempts to provide a broad perspective on the management concepts concerned, which cover areas such as strategic management, information systems, and human resources. Hence, these standards are closely related to the major resources of institutions and the basic capabilities that control and manage them. Approaches to improving the performance of both business and operations have developed over the last decades: starting with management by objectives and results, passing through total quality control, then total quality management, Six Sigma, the theory of constraints, re-engineering, the waste-elimination (lean) methodology, knowledge management, electronic supply chain management, then the integration of Six Sigma and lean into Lean Six Sigma (LSS), and finally High Performance Organizations. Some of these approaches focus on performance efficiency, others on performance effectiveness, and some on developing the knowledge capabilities of the organization, and thereby its intellectual capital, in order to achieve self-sustainable development. The book focuses on the Six Sigma methodology as a quality measurement and improvement program, developed by Motorola, which pushed process control to the six sigma level, or 3.4 defects per million units produced; this includes identifying the most important factors for quality as determined by the customer.
Through this, process variation is reduced, capabilities are improved, stability is increased, and auxiliary systems, such as Design for Six Sigma (DFSS), are designed to help achieve the Six Sigma goal.
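The 3.4-defects-per-million figure corresponds to a six-sigma process under the conventional 1.5-sigma shift; a quick check with Python's standard library:

```python
from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    """Convert defects-per-million-opportunities (DPMO) to a sigma level,
    applying the conventional 1.5-sigma long-term shift."""
    yield_rate = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_rate) + shift

print(round(sigma_level(3.4), 2))  # 6.0
```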
Cross-attention is an important component of neural machine translation (NMT), which in previous methods is always realized by dot-product attention. However, dot-product attention only considers the pair-wise correlation between words, resulting in dispersion when dealing with long sentences and neglect of source neighboring relationships. From a linguistic perspective, the above issues are caused by ignoring a type of cross-attention, called concentrated attention, which focuses on several central words and then spreads around them. In this work, we apply a Gaussian Mixture Model (GMM) to model concentrated attention in cross-attention. Experiments and analyses conducted on three datasets show that the proposed method outperforms the baseline and yields significant improvements in alignment quality, N-gram accuracy, and long sentence translation.
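As a rough illustration of the idea, not the paper's implementation, concentrated attention can be modeled as a mixture of Gaussians over source positions, each component peaking at a central word and decaying around it:

```python
import math

def gmm_attention(num_src, centers, scales, weights):
    """Attention over source positions as a Gaussian mixture: each
    component is centered on a 'central word' and spreads around it."""
    scores = []
    for j in range(num_src):
        s = sum(w * math.exp(-0.5 * ((j - mu) / sd) ** 2)
                for mu, sd, w in zip(centers, scales, weights))
        scores.append(s)
    total = sum(scores)
    return [s / total for s in scores]  # normalized attention distribution

# Two central words at positions 2 and 7 in a 10-token source sentence
attn = gmm_attention(10, centers=[2, 7], scales=[1.0, 1.0], weights=[0.6, 0.4])
```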
Providing pretrained language models with simple task descriptions in natural language enables them to solve some tasks in a fully unsupervised fashion. Moreover, when combined with regular learning from examples, this idea yields impressive few-shot results for a wide range of text classification tasks. It is also a promising direction to improve data efficiency in generative settings, but there are several challenges to using a combination of task descriptions and example-based learning for text generation. In particular, it is crucial to find task descriptions that are easy to understand for the pretrained model and to ensure that it actually makes good use of them; furthermore, effective measures against overfitting have to be implemented. In this paper, we show how these challenges can be tackled: we introduce GenPET, a method for text generation that is based on pattern-exploiting training, a recent approach for combining textual instructions with supervised learning that previously worked only for classification tasks. On several summarization and headline generation datasets, GenPET gives consistent improvements over strong baselines in few-shot settings.
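At its core, a pattern wraps the input in a natural-language task description before it reaches the model. A minimal sketch; the "TL;DR:" pattern here is an illustrative choice, not necessarily GenPET's exact one:

```python
def apply_pattern(text, pattern="{text} TL;DR:"):
    """Wrap the input in a natural-language task description (a 'pattern')
    so the pretrained model conditions its generation on the instruction."""
    return pattern.format(text=text)

prompt = apply_pattern("Long article about climate policy.")
print(prompt)  # Long article about climate policy. TL;DR:
```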
Neural table-to-text generation models have achieved remarkable progress on an array of tasks. However, due to the data-hungry nature of neural models, their performance strongly relies on large-scale training examples, limiting their applicability in real-world applications. To address this, we propose a new framework, Prototype-to-Generate (P2G), for table-to-text generation under the few-shot scenario. The proposed framework utilizes retrieved prototypes, which are jointly selected by an IR system and a novel prototype selector, to help the model bridge the structural gap between tables and texts. Experimental results on three benchmark datasets with three state-of-the-art models demonstrate that the proposed framework significantly improves model performance across various evaluation metrics.
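A toy stand-in for the selection step: among IR-retrieved candidate sentences, keep the one that best matches the table's cell values. P2G's actual selector is learned; the token-overlap score and the example data below are purely illustrative:

```python
def select_prototype(table_cells, candidates):
    """Pick the retrieved candidate sentence sharing the most tokens
    with the table's cell values (a crude proxy for a learned selector)."""
    cell_tokens = {c.lower() for c in table_cells}

    def overlap(sentence):
        return len(set(sentence.lower().split()) & cell_tokens)

    return max(candidates, key=overlap)

# Hypothetical table cells and retrieved prototype sentences
cells = ["Messi", "Barcelona", "2012"]
candidates = ["Messi scored for Barcelona .", "The weather was sunny ."]
best = select_prototype(cells, candidates)
```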
Metaphors are ubiquitous in natural language, and detecting them requires contextual reasoning about whether a semantic incongruence actually exists. Most existing work addresses this problem using pre-trained contextualized models. Despite their success, these models require a large amount of labeled data and are not linguistically based. In this paper, we propose a ContrAstive pre-Trained modEl (CATE) for metaphor detection with semi-supervised learning. Our model first uses a pre-trained model to obtain a contextual representation of target words and employs a contrastive objective to promote an increased distance between target words' literal and metaphorical senses based on linguistic theories. Furthermore, we propose a simple strategy to collect large-scale candidate instances from the general corpus and generalize the model via self-training. Extensive experiments show that CATE achieves better performance than state-of-the-art baselines on several benchmark datasets.
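A toy version of the contrastive idea, pushing a target word's representation toward its literal sense and away from its metaphorical one. The vectors, margin, and loss form below are illustrative, not CATE's actual architecture:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def contrastive_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style contrastive loss: pull the anchor toward the positive
    (literal) sense and push it away from the negative (metaphorical) one."""
    return max(0.0, margin - cosine(anchor, positive) + cosine(anchor, negative))

anchor, literal, metaphor = [1.0, 0.0], [0.9, 0.1], [0.0, 1.0]
loss = contrastive_loss(anchor, literal, metaphor)
```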
In the Arabic language, diacritics are used to specify meanings as well as pronunciations. However, diacritics are often omitted from written texts, which increases the number of possible meanings and pronunciations. This leads to ambiguous text and makes computational processing of undiacritized text more difficult. In this paper, we propose a Linguistic Attentional Model for Arabic text Diacritization (LAMAD). In LAMAD, a new linguistic feature representation is presented, which utilizes both word and character contextual features. Then, a linguistic attention mechanism is proposed to capture the important linguistic features. In addition, we explore the impact of the linguistic features extracted from the text on Arabic text diacritization (ATD) by introducing them to the linguistic attention mechanism. Extensive experimental results on three datasets of different sizes illustrate that LAMAD outperforms the existing state-of-the-art models.
A critical point of multi-document summarization (MDS) is to learn the relations among various documents. In this paper, we propose a novel abstractive MDS model in which we represent multiple documents as a heterogeneous graph, taking semantic nodes of different granularities into account, and then apply a graph-to-sequence framework to generate summaries. Moreover, we employ a neural topic model to jointly discover latent topics that can act as cross-document semantic units to bridge different documents and provide global information to guide summary generation. Since topic extraction can be viewed as a special type of summarization that "summarizes" texts into a more abstract format, i.e., a topic distribution, we adopt a multi-task learning strategy to jointly train the topic and summarization modules, allowing each to promote the other. Experimental results on the Multi-News dataset demonstrate that our model outperforms previous state-of-the-art MDS models on both Rouge scores and human evaluation, while learning high-quality topics.
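The multi-task strategy amounts to optimizing one combined objective. A minimal sketch; the mixing weight `alpha` is a hypothetical hyperparameter, not a value from the paper:

```python
def multitask_loss(summarization_loss, topic_loss, alpha=0.5):
    """Joint training objective: the summarization loss plus a weighted
    topic-modeling loss, so the two modules are optimized together
    and can promote each other."""
    return summarization_loss + alpha * topic_loss

total = multitask_loss(2.0, 1.0)
print(total)  # 2.5
```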
This paper proposes a technique for adding a new source or target language to an existing multilingual NMT model without re-training it on the initial set of languages. It consists of replacing the shared vocabulary with a small language-specific vocabulary and fine-tuning the new embeddings on the new language's parallel data. Some additional language-specific components may be trained to improve performance (e.g., Transformer layers or adapter modules). Because the parameters of the original model are not modified, its performance on the initial languages does not degrade. We show on two sets of experiments (small-scale on TED Talks, and large-scale on ParaCrawl) that this approach performs as well as or better than more costly alternatives, and that it has excellent zero-shot performance: training on English-centric data is enough to translate between the new language and any of the initial languages.
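The key property is that only the new language-specific parameters are updated while the shared body stays frozen. A minimal sketch of that partition; the `emb_xx`/`adapter_xx` naming scheme is a hypothetical convention for illustration, not the paper's code:

```python
def trainable_parameters(param_names, new_lang="xx"):
    """Select only the new language-specific parameters for fine-tuning;
    everything shared with the original model stays frozen, so performance
    on the initial languages cannot degrade."""
    # Hypothetical naming scheme: new-language embeddings and adapters carry
    # a language-suffixed prefix; all other parameters form the frozen body.
    prefixes = (f"emb_{new_lang}", f"adapter_{new_lang}")
    return [n for n in param_names if n.startswith(prefixes)]

params = ["encoder.layer0.attn", "emb_xx.weight",
          "adapter_xx.down", "decoder.layer0.ffn"]
print(trainable_parameters(params))  # ['emb_xx.weight', 'adapter_xx.down']
```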