
Quantifying Intrinsic Value of Information of Trajectories

Published by: Kien Nguyen
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





A trajectory, defined as a sequence of location measurements, contains valuable information about the movements of an individual. Its value of information (VOI) may change depending on the specific application. However, in a variety of applications, knowing the intrinsic VOI of a trajectory is important to guide other subsequent tasks or decisions. This work aims to find a principled framework to quantify the intrinsic VOI of trajectories from the owner's perspective. This is a challenging problem because an appropriate framework needs to take into account various characteristics of the trajectory, prior knowledge, and different types of trajectory degradation. We propose a framework based on information gain (IG) as a principled approach to solve this problem. Our IG framework transforms a trajectory with discrete-time measurements to a canonical representation, i.e., continuous in time with continuous mean and variance estimates, and then quantifies the reduction of uncertainty about the locations of the owner over a period of time as the VOI of the trajectory. Qualitative and extensive quantitative evaluations show that the IG framework is capable of effectively capturing important characteristics contributing to the VOI of trajectories.
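As a rough illustration of the idea (not the paper's implementation), the sketch below assumes a Gaussian-process-style interpolation supplies the continuous mean/variance representation, and measures VOI as the summed reduction of Gaussian entropy relative to an assumed prior location uncertainty. The kernel, the prior variance, and the time grid are all illustrative choices.

```python
# Illustrative sketch of an information-gain (IG) style VOI computation.
# Assumptions (not from the paper): an RBF-kernel Gaussian process provides
# the continuous variance estimate, and the prior location uncertainty is an
# isotropic Gaussian with variance prior_var per coordinate.
import numpy as np

def rbf_kernel(t1, t2, length_scale=300.0, signal_var=1.0):
    """Squared-exponential kernel over timestamps (seconds)."""
    d = t1[:, None] - t2[None, :]
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_var(t_obs, t_query, noise_var=1e-3, **kern_kw):
    """Posterior variance of a GP at query times, given observation times."""
    K = rbf_kernel(t_obs, t_obs, **kern_kw) + noise_var * np.eye(len(t_obs))
    Ks = rbf_kernel(t_query, t_obs, **kern_kw)
    Kss = rbf_kernel(t_query, t_query, **kern_kw)
    solve = np.linalg.solve(K, Ks.T)
    return np.clip(np.diag(Kss) - np.sum(Ks * solve.T, axis=1), 1e-9, None)

def trajectory_voi(t_obs, t_grid, prior_var=1.0, dims=2):
    """IG-style VOI: reduction in Gaussian differential entropy, summed over
    a time grid and over the spatial dimensions (assumed independent)."""
    post_var = gp_posterior_var(np.asarray(t_obs, float), np.asarray(t_grid, float))
    ig_per_point = 0.5 * np.log(prior_var / np.minimum(post_var, prior_var))
    return dims * ig_per_point.sum()

# Example: a trajectory sampled every 60 s says more about the owner's
# whereabouts over an hour than one sampled every 10 minutes.
grid = np.linspace(0, 3600, 121)
print(trajectory_voi(np.arange(0, 3601, 60), grid))
print(trajectory_voi(np.arange(0, 3601, 600), grid))
```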




Read also

Xiao Xue, Deyu Zhou, Yaodan Guo (2020)
With the development of cloud computing, service computing, IoT (Internet of Things) and the mobile Internet, the diversity and sociality of services are increasingly apparent. To meet customized user demands, the service ecosystem is emerging as a complex social-technology system, formed from various IT services through cross-border integration. However, how to analyze and promote the evolution mechanism of a service ecosystem remains a serious challenge in the field, and is of great significance for achieving the expected system evolution trends. Based on this, this paper proposes a value-driven analysis framework for service ecosystems, covering value creation, value operation, value realization and value distribution. In addition, a computational experiment system is established to verify the effectiveness of the analysis framework, which simulates the effect of different operation strategies on the value network in the service ecosystem. The results show that our analysis framework can provide new means and ideas for the analysis of service ecosystem evolution, and can also support the design of operation strategies.
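The abstract does not detail the experiment design; the toy sketch below only illustrates the general idea of a computational experiment that compares hypothetical operation strategies on a small value network. The transfer matrices and creation rates are invented for illustration, not taken from the paper.

```python
# Toy computational-experiment sketch (illustrative only): compare how two
# hypothetical operation strategies redistribute value across a small value
# network of service providers over repeated rounds.
import numpy as np

def simulate(transfer_matrix, creation_rates, rounds=50):
    """Each round, providers create value, then value flows along the
    (row-stochastic) transfer matrix; returns the final value per provider."""
    value = np.zeros(len(creation_rates))
    for _ in range(rounds):
        value = transfer_matrix.T @ (value + creation_rates)
    return value

creation = np.array([1.0, 0.5, 0.2])           # value created per round
even_strategy = np.full((3, 3), 1 / 3)          # value shared evenly
platform_strategy = np.array([[0.2, 0.2, 0.6],  # value pulled toward node 2
                              [0.2, 0.2, 0.6],
                              [0.1, 0.1, 0.8]])

print("even:    ", simulate(even_strategy, creation))
print("platform:", simulate(platform_strategy, creation))
```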
With the development of cloud computing, service computing, IoT (Internet of Things) and the mobile Internet, the diversity and sociality of services are increasingly apparent. To meet customized user demands, service ecosystems begin to emerge through the formation of collaboration networks of various IT services. However, a service ecosystem is a complex social-technology system with the characteristics of natural ecosystems, economic systems and complex networks. Hence, multi-dimensional evaluation of a service ecosystem is of great significance for promoting its sound development. Based on this, this paper proposes a value entropy model to analyze the performance of a service ecosystem, which makes it possible to integrate evaluation indicators across different dimensions. In addition, a computational experiment system is constructed to verify the effectiveness of the value entropy model. The results show that our model can provide new means and ideas for the analysis of service ecosystems.
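The value entropy formula itself is not given in the abstract; the sketch below assumes a simple Shannon-entropy aggregation over normalized indicator scores, just to show how indicators from different dimensions might be folded into a single number.

```python
# Minimal sketch of an entropy-style aggregation over multi-dimensional
# evaluation indicators. The specific "value entropy" definition is an
# assumption here: Shannon entropy over normalized indicator shares.
import math

def value_entropy(indicators):
    """indicators: dict of dimension -> non-negative score.
    Returns entropy in bits: higher means value is spread more evenly
    across dimensions, lower means it concentrates in a few of them."""
    total = sum(indicators.values())
    shares = [v / total for v in indicators.values() if v > 0]
    return -sum(p * math.log2(p) for p in shares)

print(value_entropy({"ecological": 3.0, "economic": 3.0, "network": 3.0}))  # ~1.585 bits
print(value_entropy({"ecological": 8.0, "economic": 1.0, "network": 1.0}))  # lower
```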
We present a simple method to efficiently compute a lower bound on the topological entropy, and its spatial distribution, for two-dimensional mappings. These mappings could represent either two-dimensional time-periodic fluid flows or three-dimensional magnetic fields that are periodic in one direction. The method is based on measuring the length of a material line in the flow. Depending on the nature of the flow, the fluid can be mixed very efficiently, which causes the line to stretch. Here we study a method that adaptively increases the resolution at locations along the line where folds lead to high curvature. This greatly reduces the computational cost, allowing us to study unprecedented parameter regimes. We demonstrate how this efficient implementation allows the computation of the variation of the finite-time topological entropy in the mapping. This measure quantifies spatial variations of the braiding efficiency, which is important in many practical applications.
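A minimal sketch of the material-line idea, using the Chirikov standard map as a stand-in two-dimensional mapping: the paper's method refines where folds create high curvature, whereas this sketch refines wherever a mapped segment exceeds a length tolerance, and all parameter values are illustrative.

```python
# Lower-bound the (finite-time) topological entropy from the exponential
# growth of a material line advected by a 2D map. Stand-in map and
# parameters; refinement here is by segment length, not curvature.
import numpy as np

K = 1.5  # stirring strength of the stand-in map

def standard_map(x, p):
    """Lifted Chirikov standard map; the lift preserves curve lengths."""
    p_new = p + K * np.sin(x)
    return x + p_new, p_new

def line_length(pts):
    return np.hypot(np.diff(pts[:, 0]), np.diff(pts[:, 1])).sum()

def advect_line(pts, n_iter=10, tol=0.05):
    """Advect a polyline under the map, bisecting under-resolved segments."""
    lengths = [line_length(pts)]
    for _ in range(n_iter):
        while True:
            imgs = np.column_stack(standard_map(pts[:, 0], pts[:, 1]))
            gaps = np.hypot(np.diff(imgs[:, 0]), np.diff(imgs[:, 1]))
            bad = np.where(gaps > tol)[0]
            if bad.size == 0:
                break
            mids = (pts[bad] + pts[bad + 1]) / 2         # refine the preimage
            pts = np.insert(pts, bad + 1, mids, axis=0)
        pts = imgs
        lengths.append(line_length(pts))
    return np.array(lengths)

# Initial material line: a horizontal segment crossing the chaotic region.
x0 = np.linspace(0.0, 2 * np.pi, 200)
L = advect_line(np.column_stack([x0, np.full_like(x0, 0.5)]))
h_est = np.polyfit(np.arange(len(L)), np.log(L), 1)[0]   # log-length growth rate
print(f"finite-time topological entropy lower bound ~ {h_est:.3f}")
```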
Intracity heavy truck freight trips are basic data in city freight system planning and management. In the big data era, massive heavy truck GPS trajectories can be acquired cost effectively in real-time. Identifying freight trip ends (origins and destinations) from heavy truck GPS trajectories is an outstanding problem. Although previous studies proposed a variety of trip end identification methods from different perspectives, these studies subjectively defined key threshold parameters and ignored the complex intracity heavy truck travel characteristics. Here, we propose a data-driven trip end identification method in which the speed threshold for identifying truck stops and the multilevel time thresholds for distinguishing temporary stops and freight trip ends are objectively defined. Moreover, an appropriate time threshold level is dynamically selected by considering the intracity activity patterns of heavy trucks. Furthermore, we use urban road networks and point-of-interest (POI) data to eliminate misidentified trip ends to improve method accuracy. The validation results show that the accuracy of the method we propose is 87.45%. Our method incorporates the impact of the city freight context on truck trajectory characteristics, and its results can reflect the spatial distribution and chain patterns of intracity heavy truck freight trips, which have a wide range of practical applications.
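As a rough sketch of this kind of pipeline (not the paper's calibrated method), the function below groups consecutive low-speed fixes into stops and keeps those exceeding a duration threshold as candidate trip ends; the thresholds are placeholders, and the POI/road-network plausibility check is stubbed out as a caller-supplied predicate.

```python
# Illustrative threshold-based trip-end extraction from a truck GPS trace.
# The paper derives its thresholds from the data and applies context filters;
# the values and the `keep` predicate below are placeholders.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Fix:
    t: float      # unix timestamp (s)
    lat: float
    lon: float
    speed: float  # km/h

def extract_trip_ends(trace: List[Fix],
                      speed_thresh: float = 2.0,            # km/h, placeholder
                      stop_duration_thresh: float = 1800.0,  # s, placeholder
                      keep: Callable[[Fix], bool] = lambda f: True) -> List[Fix]:
    """Group consecutive low-speed fixes into stops; stops longer than the
    duration threshold become candidate trip ends; `keep` stands in for the
    POI/road-network plausibility check."""
    ends, stop = [], []
    for fix in trace:
        if fix.speed <= speed_thresh:
            stop.append(fix)
            continue
        if stop and stop[-1].t - stop[0].t >= stop_duration_thresh:
            ends.append(stop[0])              # represent the stop by its start
        stop = []
    if stop and stop[-1].t - stop[0].t >= stop_duration_thresh:
        ends.append(stop[0])
    return [e for e in ends if keep(e)]
```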
We address the problem of how to optimally schedule data packets over an unreliable channel in order to minimize the estimation error of a simple-to-implement remote linear estimator that uses a constant Kalman gain to track the state of a Gauss-Markov process. The remote estimator receives time-stamped data packets which contain noisy observations of the process. They also contain information about the quality of the sensor source, i.e., the variance of the observation noise that was used to generate the packet. In order to minimize the estimation error, the scheduler needs to use both the age and the source quality of each packet when prioritizing transmissions. It is shown that a simple index rule that calculates the value of information (VoI) of each packet, and then schedules the packet with the largest current VoI, is optimal. The VoI of a packet decreases with its age, and increases with the precision of the source. Thus, we conclude that, for constant filter gains, a policy which minimizes the age of information does not necessarily maximize the estimator performance.
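A minimal sketch of an index-rule scheduler in this spirit: the VoI expression below is a stand-in that only preserves the stated monotonicity (decreasing in age, increasing in source precision); the paper derives the exact index from the process dynamics and the constant filter gain.

```python
# Index-rule sketch: compute a stand-in VoI per queued packet and transmit
# the packet with the largest index.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Packet:
    timestamp: float   # when the observation was taken
    obs_var: float     # variance of the observation noise at the source

def voi_index(pkt: Packet, now: float, a: float = 0.9) -> float:
    """Stand-in VoI: geometric discounting with the packet's age, scaled by
    the precision (inverse noise variance) of its source."""
    age = max(now - pkt.timestamp, 0.0)
    return (a ** (2 * age)) / pkt.obs_var

def schedule(queue: List[Packet], now: float) -> Optional[Packet]:
    """Pick the queued packet with the largest current VoI, if any."""
    return max(queue, key=lambda p: voi_index(p, now), default=None)

# A fresher but noisier packet can lose to an older, more precise one,
# consistent with the observation that minimizing age is not always optimal.
q = [Packet(timestamp=9.0, obs_var=4.0), Packet(timestamp=5.0, obs_var=0.1)]
print(schedule(q, now=10.0))
```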