
Hierarchical Prediction Based on Two-Level Affinity Propagation Clustering for Bike-Sharing System

التنبؤ بالتسلسل الهرمي استنادًا إلى مجموعات نشر التقارب على مستويين لنظام مشاركة الدراجة (Arabic translation of the title)

Publication date: 2018
Research language: Arabic
Created by: Odai Mohammed





Bike-sharing systems are a new mode of transportation that has emerged in recent years, and more and more people at home and abroad choose to ride shared bicycles. While shared bicycles are convenient to use, there are also unfavorable factors that affect the customer's riding experience. Because the rents and returns of bikes at different stations in different periods are imbalanced, the bikes in the system need to be rebalanced frequently. There is therefore an urgent need to predict demand and reallocate the bikes in advance. In this paper, we propose a hierarchical forecasting model that predicts the number of rents and returns for each station cluster in a future period to support redistribution. First, we propose a two-level affinity propagation clustering algorithm that divides bike stations into groups, taking into account both the migration trends of bikes among stations and their geographical locations. Based on this two-level hierarchy of stations, the total rents of bikes are predicted. Then, we use a multi-similarity-based inference model to forecast the migration proportions within and across clusters, from which the rents and returns of bikes at each station can be deduced. To verify the effectiveness of our two-level hierarchical prediction model, we validate it on the bike-sharing system of New York City and compare the results with those obtained by other popular methods. Experimental results demonstrate its superiority over the other methods.
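For readers who want a feel for the overall shape of the approach, the following is a minimal sketch in Python. It is not the authors' implementation: it uses scikit-learn's AffinityPropagation on synthetic station data as a stand-in for the paper's two-level clustering (first by bike migration patterns, then by geographic location within each group), and a plain historical-share split as a stand-in for the paper's inference of per-station rents. All data, parameters, and variable names below are illustrative assumptions.

```python
# Synthetic illustration of two-level affinity propagation clustering for bike stations,
# followed by a simple top-down split of a cluster-level rent forecast.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
n_stations = 40

# Hypothetical inputs: station coordinates (lat, lon) and a station-to-station
# transition matrix counting historical bike trips between stations.
coords = rng.uniform([40.70, -74.02], [40.78, -73.93], size=(n_stations, 2))
transitions = rng.poisson(3.0, size=(n_stations, n_stations))

# Level 1: group stations whose outgoing migration patterns look alike.
# Affinity propagation accepts a precomputed similarity matrix; here we use the
# negative Euclidean distance between row-normalised transition profiles.
profiles = transitions / transitions.sum(axis=1, keepdims=True)
sim = -np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)
level1 = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)

# Level 2: within each first-level group, cluster again by geographic proximity.
station_cluster = np.empty(n_stations, dtype=int)
next_id = 0
for g in np.unique(level1.labels_):
    idx = np.where(level1.labels_ == g)[0]
    if len(idx) == 1:
        labels2 = np.zeros(1, dtype=int)
    else:
        labels2 = AffinityPropagation(random_state=0).fit(coords[idx]).labels_
        if labels2.min() < 0:            # did not converge: keep the group as one cluster
            labels2 = np.zeros(len(idx), dtype=int)
    station_cluster[idx] = labels2 + next_id
    next_id += labels2.max() + 1

# Simplified hierarchical prediction: given a forecast of total rents per cluster
# (assumed here to be 120 bikes each), split it among member stations in proportion
# to their historical share of rents.
historical_rents = transitions.sum(axis=1).astype(float)
cluster_forecast = {c: 120.0 for c in np.unique(station_cluster)}

station_forecast = np.zeros(n_stations)
for c, total in cluster_forecast.items():
    idx = np.where(station_cluster == c)[0]
    station_forecast[idx] = total * historical_rents[idx] / historical_rents[idx].sum()

print("number of station clusters:", len(np.unique(station_cluster)))
print("per-station rent forecast (first 5):", np.round(station_forecast[:5], 1))
```

The precomputed-similarity form is used at the first level because migration profiles are not naturally Euclidean points; any other similarity between stations' trip patterns could be substituted without changing the structure of the sketch.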


Artificial intelligence review:
Research summary
The paper discusses bike-sharing systems as a modern mode of transportation that has emerged in recent years, and addresses the problem of the imbalanced distribution of bikes across stations. It proposes a hierarchical prediction model based on a two-level affinity propagation clustering algorithm that groups bike stations into clusters while taking bike migration trends and geographical locations into account. A multi-similarity-based inference model is then used to predict the proportions of rents and returns among clusters. The effectiveness of the proposed model was validated on data from the New York City bike-sharing system, and the experimental results show that it outperforms other methods in prediction accuracy.
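The "multi-similarity-based inference model" mentioned above can be pictured, in a deliberately simplified form, as estimating the migration proportions for a target period as a similarity-weighted average of historical proportion matrices, with weights derived from how close each historical period is to the target in hour of day, weekday/weekend status, and temperature. The sketch below is an assumed interpretation for illustration only, on synthetic data; the paper's actual similarity features and combination rule may differ.

```python
# Hedged sketch: estimate cluster-to-cluster migration proportions for a target period
# as a weighted average of historical proportion matrices. Weights combine several
# per-feature similarities (hour, weekday/weekend, temperature). All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_periods = 5, 200

# Synthetic history: context features and per-period migration proportion matrices
# (each row of a matrix sums to 1: share of bikes leaving cluster i that arrive at j).
hours = rng.integers(0, 24, n_periods)
is_weekend = rng.integers(0, 2, n_periods)
temps = rng.uniform(-5, 35, n_periods)
hist_props = rng.dirichlet(np.ones(n_clusters), size=(n_periods, n_clusters))

def similarity(h, w, t, h0, w0, t0):
    """Combine several per-feature similarities into a single weight (an assumed form)."""
    s_hour = np.exp(-np.minimum(np.abs(h - h0), 24 - np.abs(h - h0)) / 3.0)  # cyclic hours
    s_day = np.where(w == w0, 1.0, 0.3)
    s_temp = np.exp(-np.abs(t - t0) / 10.0)
    return s_hour * s_day * s_temp

# Target period context (hypothetical): 8 am on a weekday, 20 degrees C.
weights = similarity(hours, is_weekend, temps, h0=8, w0=0, t0=20.0)
weights /= weights.sum()

# Weighted average of historical proportion matrices, then renormalise each row.
est = np.tensordot(weights, hist_props, axes=1)   # shape (n_clusters, n_clusters)
est /= est.sum(axis=1, keepdims=True)

print(np.round(est, 3))
```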
Critical review
The paper presents an innovative model for addressing the imbalanced distribution of bikes in bike-sharing systems, but several points could be improved. First, the model relies heavily on historical and meteorological data, which may make it less effective under unexpected conditions or special events. Second, the impact of social and economic factors on bike usage is not sufficiently addressed. Finally, the model could be improved by incorporating more advanced machine learning techniques, such as deep neural networks, to further increase prediction accuracy.
Questions related to the research
  1. What is the main problem the paper tries to solve?

    The paper addresses the imbalanced distribution of bikes across the different stations of a bike-sharing system.

  2. What algorithm does the proposed model use?

    The proposed model uses a two-level affinity propagation clustering algorithm to group bike stations into clusters.

  3. How was the effectiveness of the proposed model verified?

    The model was validated on data from the New York City bike-sharing system, and its results were compared with those of other popular methods.

  4. Which aspects of the proposed model could be improved?

    The model could be improved by incorporating more advanced machine learning techniques, accounting for the impact of social and economic factors, and increasing its robustness under unexpected conditions and special events.


References used
W. Jia, Y. Tan and J. Li, "Hierarchical Prediction Based on Two-Level Affinity Propagation Clustering for Bike-Sharing System," in IEEE Access, vol. 6, pp. 45875-45885, 2018.
Related research

Emotion cause extraction (ECE) aims to extract the causes behind a given emotion in text. Several works on the ECE task have been published and have attracted a lot of attention in recent years. However, these methods neglect two major issues: 1) they pay little attention to the effect of document-level context information on ECE, and 2) they do not sufficiently explore how to effectively use the annotated emotion clause. For the first issue, we propose a bidirectional hierarchical attention network (BHA) corresponding to the specified candidate cause clause to capture the document-level context in a structured and dynamic manner. For the second issue, we design an emotional filtering module (EF) for each layer of the graph attention network, which calculates a gate score based on the emotion clause to filter out irrelevant information. Combining the BHA and EF, the EF-BHA can dynamically aggregate contextual information from two directions and filter out irrelevant information. The experimental results demonstrate that EF-BHA achieves competitive performance on two public datasets in different languages (Chinese and English). Moreover, we quantify the effect of context on emotion cause extraction and provide visualizations of the interactions between candidate cause clauses and contexts.
Deep reinforcement learning provides a promising approach for text-based games in studying natural language communication between humans and artificial agents. However, generalization remains a big challenge, as agents depend critically on the complexity and variety of training tasks. In this paper, we address this problem by introducing a hierarchical framework built upon a knowledge graph-based RL agent. At the high level, a meta-policy is executed to decompose the whole game into a set of subtasks specified by textual goals, and to select one of them based on the KG. A sub-policy at the low level is then executed to conduct goal-conditioned reinforcement learning. We carry out experiments on games with various difficulty levels and show that the proposed method enjoys favorable generalizability.
This paper describes the systems submitted to IWSLT 2021 by the Volctrans team. We participate in the offline speech translation and text-to-text simultaneous translation tracks. For offline speech translation, our best end-to-end model achieves a 7.9 BLEU improvement over the benchmark on the MuST-C test set and even approaches the results of a strong cascade solution. For text-to-text simultaneous translation, we explore best practices for optimizing the wait-k model. As a result, our final submitted systems exceed the benchmark by around 7 BLEU in the same latency regime. We release our code and models to facilitate both future research and industrial applications.
Conversations are often held in laboratories and companies. A summary is vital for grasping the content of a discussion for people who did not attend it. If the summary is illustrated as an argument structure, it helps readers grasp the discussion's essentials immediately. Our purpose in this paper is to predict the link structure between nodes that consist of utterances in a conversation: classifying each node pair as "linked" or "not linked". One approach to predicting the structure is to use machine learning models. However, the results tend to over-generate links between nodes. To solve this problem, we introduce a two-step method for the structure prediction task. We use a machine learning-based approach as the first step: a link prediction task. Then, we apply a score-based approach as the second step: a link selection task. Our two-step method dramatically improves accuracy compared with one-step methods based on SVM and BERT.
In Automated Claim Verification, we retrieve evidence from a knowledge base to determine the veracity of a claim. Intuitively, retrieving the correct evidence plays a crucial role in this process. Often, evidence selection is tackled as a pairwise sentence classification task, i.e., we train a model to predict for each sentence individually whether it is evidence for a claim. In this work, we fine-tune document-level transformers to extract all evidence from a Wikipedia document at once. We show that this approach performs better than a comparable model classifying sentences individually on all relevant evidence selection metrics in FEVER. Our complete pipeline, building on this evidence selection procedure, produces a new state-of-the-art result on FEVER, a popular claim verification benchmark.
