
The study aims to apply the financial ratios of the Altman model to predict the financial failure of the private Syrian commercial banks listed on the Damascus Securities Exchange, and to identify the effect of using the Altman model on the returns of each bank's loan portfolio separately. To achieve this, the necessary data were collected from the published annual reports of 11 banks available on the official website of the Damascus Securities Exchange, covering the years 2011 to 2019. The independent variable was the Altman model, measured using financial ratios (profitability - liquidity - financial independence - operational efficiency), and the dependent variable was the return on the loan portfolio, measured as: loan portfolio rate of return = total interest and commissions from loans / total loans. The results of the study showed no statistically significant effect of the Altman model on the rate of return on the loan portfolio of the following private commercial banks: (Bank Audi - Syria, Bank Al-Sharq, Arab Bank - Syria, Fransabank - Syria, Bank of Syria and Overseas, Byblos Bank - Syria, International Bank for Trade and Finance, Bank of Syria and Gulf). There is, however, a positive and statistically significant effect of the Altman model on the rate of return on the loan portfolio of the following private commercial banks: (Qatar National Bank - Syria, Bank of Jordan - Syria, Bemo Saudi Fransi Bank).
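The dependent-variable formula stated in the abstract can be illustrated with a short calculation. The sketch below uses invented figures, and the Z''-score shown is the common emerging-markets variant of the Altman model; the study's own ratio weights are not given in the abstract.

```python
# Hypothetical illustration only: figures are invented, not taken from the study.

def loan_portfolio_rate_of_return(interest_and_commissions, total_loans):
    """Loan portfolio rate of return = total interest and commissions from loans / total loans."""
    return interest_and_commissions / total_loans

def altman_z_double_prime(working_capital, retained_earnings, ebit,
                          book_equity, total_assets, total_liabilities):
    """Emerging-markets Altman Z''-score, one common form of the model.
    The study's own ratio set (profitability, liquidity, financial
    independence, operational efficiency) may be weighted differently."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = book_equity / total_liabilities
    return 6.56 * x1 + 3.26 * x2 + 6.72 * x3 + 1.05 * x4

print(loan_portfolio_rate_of_return(12_500_000, 250_000_000))   # 0.05, i.e. a 5% portfolio return
print(altman_z_double_prime(40e6, 15e6, 20e6, 90e6, 400e6, 310e6))
```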
The present study examines the impact of seven of the most important internal factors on stock prices for all listed banks in the Dubai and Abu Dhabi stock markets. Pooled Least Squares, Fixed Effects (FE), and Random Effects (RE) models were used to analyse data on 23 banks for the period 2014-2017. The aim of the study is to identify the most important internal factors affecting stock prices in the banking sector of the United Arab Emirates, and whether the internal factors determining stock prices in this sector are the same for the Dubai and Abu Dhabi stock markets. The results give evidence of a positive and significant impact of Earnings Per Share (EPS) and Dividend Per Share (DPS) on the market price of shares, in both markets for the former and only in the Abu Dhabi stock market for the latter. By contrast, the study reveals a negative impact of Return on Equity (RoE), Dividend Yield (DY), and the Price-Earnings ratio (P_E) on the market price of shares. Even more importantly, the study gives evidence of a differentiated impact of the variables representing dividend policies on the market price of shares between the two markets investigated in the United Arab Emirates.
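The three panel estimators named above can be sketched as follows, assuming the linearmodels package and a hypothetical bank-year dataset; the file name and column names (eps, dps, roe, dy, pe, price) are placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

# Hypothetical panel: one row per (bank, year); column names are placeholders.
df = pd.read_csv("uae_banks_panel.csv")      # assumed file, not provided by the study
df = df.set_index(["bank", "year"])          # linearmodels expects an entity/time MultiIndex

y = df["price"]                              # market price per share
X = sm.add_constant(df[["eps", "dps", "roe", "dy", "pe"]])

pooled = PooledOLS(y, X).fit()
fixed  = PanelOLS(y, X, entity_effects=True).fit()   # bank fixed effects
random = RandomEffects(y, X).fit()

print(pooled.params, fixed.params, random.params, sep="\n\n")
```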
The European Quality Model that characterizes this edition is based on the following premise: satisfaction of customers and employees and a positive impact on society can all be achieved through leadership, strategic policy, correct management of personnel, effective use of available resources, and correct definition of operations, which ultimately results in excellence in results. This approach attempts to provide a broad perspective on the management concepts concerned, which cover areas such as strategic management, information systems, and human resources. Hence, these standards are closely related to the major resources of institutions and the basic capabilities that control and manage them. Approaches to improving the performance of both business and operations have developed over the last decades, starting with management by objectives and results, passing through total quality control, then total quality management, then Six Sigma, then the theory of constraints, then re-engineering and the waste-elimination (lean) methodology, then knowledge management, then electronic supply chain management, then the integration of Six Sigma and lean into Lean Six Sigma (LSS), and finally High Performance Organizations. Some of these approaches focus on performance efficiency, others on performance effectiveness, and some on developing the knowledge capabilities of the organization and thereby its intellectual capital in order to achieve self-sustaining development. The book focuses on the Six Sigma methodology as a quality measurement and improvement program, developed by Motorola, which concentrated on process control to the six sigma level, or 3.4 defects per million units produced. This includes identifying the most important quality factors, which are determined by the customer. Through this, process variation is reduced, capabilities are improved, stability is increased, and auxiliary systems are designed, which may include Design for Six Sigma (DFSS), to help achieve the Six Sigma goal.
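The 3.4 defects-per-million figure follows from the conventional Six Sigma assumption of a 1.5-sigma long-term process shift. A short check of that arithmetic (scipy assumed):

```python
from scipy.stats import norm

def dpmo(sigma_level, long_term_shift=1.5):
    """Defects per million opportunities for a one-sided specification limit,
    using the conventional 1.5-sigma long-term process shift."""
    return (1 - norm.cdf(sigma_level - long_term_shift)) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma -> {dpmo(level):,.1f} DPMO")
# 6 sigma -> about 3.4 DPMO, the figure cited for the Motorola programme
```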
Discourse segmentation and sentence-level discourse parsing play important roles in enabling various NLP tasks to account for textual coherence. Despite recent achievements in both tasks, there is still room for improvement due to the scarcity of labeled data. To address this problem, we propose a language model-based generative classifier (LMGC) that uses more information from labels by treating the labels as an input, while enhancing label representations by embedding descriptions for each label. Moreover, since this enables LMGC to prepare representations for labels unseen in the pre-training step, we can effectively use a pre-trained language model in LMGC. Experimental results on the RST-DT dataset show that our LMGC achieved a state-of-the-art F1 score of 96.72 in discourse segmentation. It further achieved state-of-the-art relation F1 scores of 84.69 with gold EDU boundaries and 81.18 with automatically segmented boundaries in sentence-level discourse parsing.
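The abstract does not give implementation details, but the core idea of scoring label descriptions with a generative language model can be sketched as follows, assuming HuggingFace transformers and a T5-style model; the label descriptions and prompt format here are invented placeholders, not those used in LMGC.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical relation labels and descriptions; LMGC's actual set is not given in the abstract.
LABEL_DESCRIPTIONS = {
    "Elaboration": "the second span gives additional detail about the first span",
    "Contrast":    "the two spans present contrasting or opposing information",
    "Attribution": "one span reports who said or thought the other span",
}

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small").eval()

def classify(span_pair_text):
    """Pick the label whose description the model generates with the lowest loss."""
    scores = {}
    inputs = tokenizer(span_pair_text, return_tensors="pt")
    for label, description in LABEL_DESCRIPTIONS.items():
        target = tokenizer(description, return_tensors="pt").input_ids
        with torch.no_grad():
            scores[label] = model(**inputs, labels=target).loss.item()  # mean token NLL
    return min(scores, key=scores.get), scores

print(classify("[span 1] The company cut prices. [span 2] Rivals kept theirs unchanged."))
```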
Storytelling, whether via fables, news reports, documentaries, or memoirs, can be thought of as the communication of interesting and related events that, taken together, form a concrete process. It is desirable to extract the event chains that represent such processes. However, this extraction remains a challenging problem. We posit that this is due to the nature of the texts from which chains are discovered. Natural language text interleaves a narrative of concrete, salient events with background information, contextualization, opinion, and other elements that are important for a variety of necessary discourse and pragmatics acts but are not part of the principal chain of events being communicated. We introduce methods for extracting this principal chain from natural language text, by filtering away non-salient events and supportive sentences. We demonstrate the effectiveness of our methods at isolating critical event chains by comparing their effect on downstream tasks. We show that by pre-training large language models on our extracted chains, we obtain improvements in two tasks that benefit from a clear understanding of event chains: narrative prediction and event-based temporal question answering. The demonstrated improvements and ablative studies confirm that our extraction method isolates critical event chains.
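As a schematic illustration only (the paper's actual salience model and features are not described in the abstract), the filtering step can be thought of as keeping sentences judged to carry salient events and dropping the supportive material; `is_salient_event` below is a hypothetical placeholder classifier.

```python
from typing import Callable, List

def extract_principal_chain(sentences: List[str],
                            is_salient_event: Callable[[str], bool]) -> List[str]:
    """Keep only sentences judged to carry salient, concrete events; drop background,
    contextualization, opinion, and other supportive material."""
    return [s for s in sentences if is_salient_event(s)]

# Toy usage: a trivial keyword heuristic stands in for a learned salience classifier.
toy_classifier = lambda s: any(v in s for v in (" announced ", " signed ", " resigned "))
story = [
    "The minister announced the new budget on Monday.",
    "Budgets in this country have a long and complicated history.",
    "She later signed the decree implementing it.",
]
print(extract_principal_chain(story, toy_classifier))  # keeps the first and third sentences
```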
Earnings calls are among the important resources that investors and analysts use to update their price targets. Firms usually publish the corresponding transcripts soon after earnings events. However, raw transcripts are often too long and lack a coherent structure. To enhance clarity, analysts write well-structured reports for some important earnings call events by analyzing them, which requires time and effort. In this paper, we propose TATSum (Template-Aware aTtention model for Summarization), a generalized neural summarization approach for structured report generation, and evaluate its performance in the earnings call domain. We build a large corpus with thousands of transcripts and reports using historical earnings events. We first generate a candidate set of reports from the corpus as potential soft templates, which do not impose actual rules on the output. Then, we employ an encoder model with a margin-ranking loss to rank the candidate set and select the best-quality template. Finally, the transcript and the selected soft template are used as input to a seq2seq framework for report generation. Empirical results on the earnings call dataset show that our model significantly outperforms state-of-the-art models in terms of informativeness and structure.
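The template-ranking step with a margin-ranking loss can be sketched as below; the random tensors stand in for encoder outputs, and the cosine-similarity scoring head is an assumption, since TATSum's actual encoder and scorer are not specified in the abstract.

```python
import torch
import torch.nn as nn

batch, dim = 8, 768
transcript_repr = torch.randn(batch, dim)    # encoded transcript (placeholder)
better_template = torch.randn(batch, dim)    # encoded higher-quality candidate template
worse_template  = torch.randn(batch, dim)    # encoded lower-quality candidate template

score   = nn.CosineSimilarity(dim=-1)        # one simple choice of relevance score
loss_fn = nn.MarginRankingLoss(margin=0.1)

pos = score(transcript_repr, better_template)
neg = score(transcript_repr, worse_template)
target = torch.ones(batch)                   # "pos should rank above neg"
loss = loss_fn(pos, neg, target)
print(loss.item())
```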
Dialogue-based relation extraction (RE) aims to extract the relation(s) between two arguments that appear in a dialogue. Because dialogues have a high occurrence of personal pronouns and low information density, and since most relational facts in dialogues are not supported by any single sentence, dialogue-based relation extraction requires a comprehensive understanding of the dialogue. In this paper, we propose the TUrn COntext awaRE Graph Convolutional Network (TUCORE-GCN), modeled by paying attention to the way people understand dialogues. In addition, we propose a novel approach that treats the task of emotion recognition in conversations (ERC) as dialogue-based RE. Experiments on a dialogue-based RE dataset and three ERC datasets demonstrate that our model is highly effective in various dialogue-based natural language understanding tasks. In these experiments, TUCORE-GCN outperforms the state-of-the-art models on most of the benchmark datasets. Our code is available at https://github.com/BlackNoodle/TUCORE-GCN.
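For readers unfamiliar with graph convolutions over dialogue turns, the sketch below shows a generic GCN building block applied to turn nodes; it is not the TUCORE-GCN architecture itself, and the speaker-based adjacency matrix is a hypothetical example.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """Plain graph convolution over turn nodes: relu(norm(A + I) @ X @ W).
    A generic building block only, not the TUCORE-GCN architecture."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        adj = adj + torch.eye(adj.size(0))                 # add self-loops
        deg_inv_sqrt = adj.sum(-1).clamp(min=1).pow(-0.5)  # symmetric normalisation
        norm_adj = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
        return torch.relu(self.linear(norm_adj @ node_feats))

# Toy dialogue graph: 4 turn nodes, edges linking turns by the same speaker (hypothetical).
feats = torch.randn(4, 16)
adj = torch.tensor([[0., 0., 1., 0.],
                    [0., 0., 0., 1.],
                    [1., 0., 0., 0.],
                    [0., 1., 0., 0.]])
print(SimpleGCNLayer(16, 8)(feats, adj).shape)  # torch.Size([4, 8])
```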
Hierarchical multi-label text classification (HMTC) deals with the challenging task in which an instance can be assigned to multiple hierarchically structured categories at the same time. The majority of prior studies either reduce the HMTC task to a flat multi-label problem, ignoring the vertical category correlations, or exploit the dependencies across different hierarchical levels without considering the horizontal correlations among categories at the same level, which inevitably leads to fundamental information loss. In this paper, we propose a novel HMTC framework that considers both vertical and horizontal category correlations. Specifically, we first design a loosely coupled graph convolutional neural network as the representation extractor to obtain representations for words, documents, and, more importantly, level-wise representations for categories, which are not considered in previous works. Then, the learned category representations are adopted to capture the vertical dependencies among levels of the category hierarchy and to model the horizontal correlations. Finally, based on the document embeddings and category embeddings, we design a hybrid algorithm to predict the categories of the entire hierarchical structure. Extensive experiments conducted on real-world HMTC datasets validate the effectiveness of the proposed framework, with significant improvements over the baselines.
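The sketch below shows only how document embeddings can be scored against level-wise category embeddings; the paper's graph-based representation extractor and hybrid prediction algorithm are not reproduced here, and the level sizes and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class LevelWiseScorer(nn.Module):
    """Schematic only: scores a document against category embeddings held per hierarchy level."""
    def __init__(self, doc_dim, cat_dim, categories_per_level):
        super().__init__()
        self.project = nn.Linear(doc_dim, cat_dim)
        # One embedding table per hierarchy level (sizes are hypothetical).
        self.level_cats = nn.ParameterList(
            [nn.Parameter(torch.randn(n, cat_dim)) for n in categories_per_level]
        )

    def forward(self, doc_emb):
        doc = self.project(doc_emb)                               # [batch, cat_dim]
        # Independent multi-label scores at each level; a full system would also
        # enforce parent-child consistency across levels.
        return [torch.sigmoid(doc @ cats.t()) for cats in self.level_cats]

scorer = LevelWiseScorer(doc_dim=256, cat_dim=64, categories_per_level=[6, 20, 55])
scores = scorer(torch.randn(2, 256))
print([s.shape for s in scores])   # [(2, 6), (2, 20), (2, 55)]
```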
Training large language models can consume a large amount of energy. We hypothesize that the language model's configuration impacts its energy consumption, and that there is room for power consumption optimisation in modern large language models. To investigate these claims, we introduce a power consumption factor to the objective function, and explore the range of models and hyperparameter configurations that affect power. We identify multiple configuration factors that can reduce power consumption during language model training while retaining model quality.
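The abstract does not specify the exact form of the power consumption factor; one simple reading is a penalised objective, sketched below with a hypothetical measured-power term normalised to an assumed power budget.

```python
import torch

def power_aware_loss(task_loss: torch.Tensor,
                     estimated_power_watts: float,
                     power_budget_watts: float = 300.0,
                     weight: float = 0.1) -> torch.Tensor:
    """Total objective = task loss + weight * (measured power / budget).
    The power estimate would come from a device power sensor; the budget and
    weight here are illustrative, not values from the paper."""
    power_term = torch.tensor(estimated_power_watts / power_budget_watts)
    return task_loss + weight * power_term

loss = power_aware_loss(torch.tensor(2.31), estimated_power_watts=245.0)
print(loss.item())   # 2.31 + 0.1 * (245 / 300) ≈ 2.392
```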
The aspect-based sentiment analysis (ABSA) task consists of three typical subtasks: aspect term extraction, opinion term extraction, and sentiment polarity classification. These three subtasks are usually performed jointly to save resources and reduce error propagation in the pipeline. However, most existing joint models focus only on the benefits of sharing an encoder between subtasks and ignore the differences between them. Therefore, we propose a joint ABSA model that not only enjoys the benefits of encoder sharing but also attends to the differences to improve the effectiveness of the model. In detail, we introduce a dual-encoder design, in which a pair encoder focuses specifically on candidate aspect-opinion pair classification, while the original encoder keeps its attention on sequence labeling. Empirical results show that our proposed model is robust and significantly outperforms the previous state of the art on four benchmark datasets.
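A minimal sketch of the dual-encoder layout described above, with placeholder BiLSTM encoders and hyperparameters standing in for whatever encoders the paper actually uses: one encoder feeds a token-level tagging head, the other feeds an aspect-opinion pair classification head.

```python
import torch
import torch.nn as nn

class DualEncoderABSA(nn.Module):
    """Schematic dual-encoder layout only; encoders and sizes are placeholders."""
    def __init__(self, vocab_size=30000, hidden=256, num_tags=7, num_polarities=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.seq_encoder  = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.pair_encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * hidden, num_tags)             # BIO-style aspect/opinion tags
        self.pair_classifier = nn.Linear(4 * hidden, num_polarities)

    def forward(self, token_ids, aspect_idx, opinion_idx):
        emb = self.embed(token_ids)
        seq_out, _  = self.seq_encoder(emb)                       # sequence-labeling view
        pair_out, _ = self.pair_encoder(emb)                      # pair-classification view
        tag_logits = self.tagger(seq_out)
        pair_repr = torch.cat([pair_out[:, aspect_idx], pair_out[:, opinion_idx]], dim=-1)
        return tag_logits, self.pair_classifier(pair_repr)

model = DualEncoderABSA()
tokens = torch.randint(0, 30000, (1, 12))
tags, polarity = model(tokens, aspect_idx=3, opinion_idx=7)
print(tags.shape, polarity.shape)   # torch.Size([1, 12, 7]) torch.Size([1, 4])
```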