
Fast Mining and Forecasting of Complex Time-Stamped Events

Forecasting and Mining of Time-Stamped Events (Arabic title)

Publication date: 2018
Research language: Arabic
Created by: sohil zidan





Given a heterogeneous social network, can we forecast its future? Can we predict who will start using a given hashtag on Twitter? Can we leverage side information, such as who retweets or follows whom, to improve our membership forecasts? We present TENSORCAST, a novel method that forecasts time-evolving networks more accurately than current state-of-the-art methods by incorporating multiple data sources in coupled tensors. TENSORCAST is (a) scalable, being linearithmic in the number of connections; (b) effective, achieving over 20% improved precision on top-1000 forecasts of community members; (c) general, being applicable to data sources with different structures. We run our method on multiple real-world networks, including DBLP and a Twitter temporal network with over 310 million nonzeros, where we predict the evolution of political hashtag activity.
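The coupled-tensor forecasting idea can be illustrated with a toy sketch. The code below is not the TENSORCAST algorithm: it shows only the plain (uncoupled) CP decomposition building block, fit by alternating least squares on a hypothetical (user × hashtag × time) tensor, with the temporal factor extrapolated linearly one step ahead. All function names, the rank, and the linear extrapolation are illustrative assumptions.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding (Kolda convention, Fortran column order)."""
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1), order="F")

def khatri_rao(A, B):
    """Column-wise Kronecker product: row k*J + j holds A[k, :] * B[j, :]."""
    K, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(K * J, R)

def cp_als(X, rank, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in X.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            # Gram matrix of the Khatri-Rao product, computed factor-wise
            gram = np.ones((rank, rank))
            for F in others:
                gram *= F.T @ F
            kr = khatri_rao(others[1], others[0])  # reversed order matches unfolding
            factors[mode] = unfold(X, mode) @ kr @ np.linalg.pinv(gram)
    return factors

def reconstruct(factors):
    """Rebuild the full tensor from its CP factors."""
    A, B, C = factors
    M = A @ khatri_rao(C, B).T
    return np.reshape(M, (A.shape[0], B.shape[0], C.shape[0]), order="F")

def forecast_next(factors):
    """Extrapolate each temporal component linearly; predict the next time slice."""
    A, B, C = factors
    K = C.shape[0]
    t = np.arange(K)
    c_next = np.array([np.polyval(np.polyfit(t, C[:, r], 1), K)
                       for r in range(C.shape[1])])
    return (A * c_next) @ B.T

# Demo: synthetic rank-2 tensor whose temporal factors drift linearly.
rng = np.random.default_rng(1)
A0, B0 = rng.random((6, 2)) + 0.5, rng.random((5, 2)) + 0.5
t9 = np.arange(9.0)
C0 = np.stack([1.0 + 0.3 * t9, 2.0 - 0.1 * t9], axis=1)  # 9 time steps
X = reconstruct([A0, B0, C0[:8]])                        # observe first 8
factors = cp_als(X, rank=2)
next_slice = forecast_next(factors)
true_next = (A0 * C0[8]) @ B0.T
```

Because the factorization recovers the factors only up to scaling and permutation, the forecast is still well defined: the per-component scalings cancel when the three factors are recombined into the predicted slice.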

References used
Y. Matsubara, Y. Sakurai, C. Faloutsos, T. Iwata, and M. Yoshikawa, "Fast mining and forecasting of complex time-stamped events," in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2012, pp. 271-279.
M. Araujo, P. Ribeiro, and C. Faloutsos, "TensorCast: Forecasting with Context using Coupled Tensors," in IEEE International Conference on Data Mining (ICDM), 2017.

Read More

Time-series modeling and forecasting has received great attention in many applied fields, such as weather forecasting and the prediction of currency prices and of fuel and electricity consumption rates. Forecasting time series provides organizations and companies with the information needed to make important decisions. Because of the practical importance of this field, a great deal of research has been carried out in it over the past years, alongside the large number of models and algorithms proposed in the scientific literature with the aim of improving both the accuracy and the efficiency of time-series modeling and forecasting.
We have introduced new applications for Dynamic Factor Graphs, consisting in topic modeling, text classification and information retrieval. DFGs are tailored here to sequences of time-stamped documents. Based on the auto-encoder architecture, our nonlinear multi-layer model is trained stage-wise to produce increasingly more compact representations of bags-of-words at the document or paragraph level, thus performing a semantic analysis. It also incorporates simple temporal dynamics on the latent representations, to take advantage of the inherent (hierarchical) structure of sequences of documents, and can simultaneously perform a supervised classification or regression on document labels, which makes our approach unique. Learning this model is done by maximizing the joint likelihood of the encoding, decoding, dynamical and supervised modules, and is possible using an approximate and gradient-based maximum-a-posteriori inference. We demonstrate that by minimizing a weighted cross-entropy loss between histograms of word occurrences and their reconstruction, we directly minimize the topic model perplexity, and show that our topic model obtains lower perplexity than Latent Dirichlet Allocation on the NIPS and State of the Union datasets. We illustrate how the dynamical constraints help the learning while enabling visualization of the topic trajectory.
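The claim above that minimizing cross-entropy between word histograms and their reconstruction directly minimizes perplexity rests on a standard identity: perplexity is the exponential of the per-word cross-entropy. A minimal sketch, with an invented four-word histogram as the example data:

```python
import math

def cross_entropy_per_word(histogram, model_probs):
    """H = -(1/N) * sum_w count(w) * log p_model(w), where N is the word count."""
    n = sum(histogram.values())
    return -sum(c * math.log(model_probs[w]) for w, c in histogram.items()) / n

def perplexity(histogram, model_probs):
    """Perplexity is exp of the per-word cross-entropy, so they move together."""
    return math.exp(cross_entropy_per_word(histogram, model_probs))

# A uniform model over a V-word vocabulary always has perplexity V:
hist = {"topic": 3, "model": 2, "data": 1, "graph": 2}
uniform = {w: 1.0 / len(hist) for w in hist}
print(round(perplexity(hist, uniform)))  # -> 4
```

Any model whose probabilities better match the empirical word frequencies yields a strictly lower cross-entropy, and therefore a strictly lower perplexity, than the uniform baseline.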
Abductive reasoning starts from some observations and aims at finding the most plausible explanation for these observations. To perform abduction, humans often make use of temporal and causal inferences, and knowledge about how some hypothetical situation can result in different outcomes. This work offers the first study of how such knowledge impacts the Abductive NLI task -- which consists in choosing the more likely explanation for given observations. We train a specialized language model LMI that is tasked to generate what could happen next from a hypothetical scenario that evolves from a given event. We then propose a multi-task model MTL to solve the Abductive NLI task, which predicts a plausible explanation by a) considering different possible events emerging from candidate hypotheses -- events generated by LMI -- and b) selecting the one that is most similar to the observed outcome. We show that our MTL model improves over prior vanilla pre-trained LMs fine-tuned on Abductive NLI. Our manual evaluation and analysis suggest that learning about possible next events from different hypothetical scenarios supports abductive inference.
The study aims at comparing ARIMA models and the exponential smoothing method in forecasting. This study also highlights the special and basic concepts of the ARIMA model and the exponential smoothing method. The comparison focuses on the ability of both methods to forecast time series whose values change little from one point to the next and time series whose values change substantially from one point to the next, and also on different lengths of the forecasting period. Currency exchange rates of the Shekel to the American dollar were used to make this comparison over the period 25/1/2010 to 22/10/2016. In addition, weekly gold prices were considered over the period 10/1/2010 to 23/10/2016. The RMSE criterion was used to compare the two methods. In this study, the researcher concluded that ARIMA models give better forecasts for time series with large point-to-point changes and for long-term forecasting, but cannot produce better forecasts for time series with small point-to-point changes, as in currency exchange prices. On the contrary, the exponential smoothing method gives better forecasts for exchange rates, whose time series change little from one point to the next, while it cannot give better forecasts over long-term forecasting periods.
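The protocol above (fit two models, hold out a horizon, compare RMSE) can be sketched in miniature. This is not statsmodels' ARIMA: as a stand-in, an AR(1) with intercept is fit by ordinary least squares and compared against simple exponential smoothing, whose forecast is flat at the last smoothed level. The series, the smoothing constant alpha, and all function names are illustrative assumptions.

```python
import math

def ses_forecast(series, horizon, alpha=0.3):
    """Simple exponential smoothing: flat forecast at the last smoothed level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon

def ar1_forecast(series, horizon):
    """AR(1) with intercept, x_t = c + phi * x_{t-1}, fit by least squares."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
           / sum((x - mx) ** 2 for x in xs))
    c = my - phi * mx
    out, last = [], series[-1]
    for _ in range(horizon):
        last = c + phi * last
        out.append(last)
    return out

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Trending series: hold out the last 10 points and compare forecast errors.
series = [0.5 * t for t in range(60)]
train, test = series[:50], series[50:]
e_ar = rmse(test, ar1_forecast(train, 10))
e_ses = rmse(test, ses_forecast(train, 10))
```

On a deterministic trend the fitted AR(1) tracks the slope exactly while the flat SES forecast lags behind, mirroring the study's finding that ARIMA-type models do better on series with large point-to-point changes and long horizons.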
In this thesis proposal, we explore the application of event extraction to literary texts. Considering the lengths of literary documents, modeling events at different granularities may be more adequate for extracting meaningful information, as individual elements contribute little to the overall semantics. We adapt the concept of schemas as sequences of events that all describe a single process, connected through shared participants, extending it to multiple schemas per document. Segmentation of event sequences into schemas is approached by modeling event sequences on tasks such as the narrative cloze task, the prediction of missing events in sequences. We propose building on sequences of event embeddings to form schema embeddings, thereby summarizing sections of documents using a single representation. This approach will allow for comparisons of different sections of documents and of entire literary works. Literature is a challenging domain because of its variety of genres, yet the representation of literary content has received relatively little attention.
