
Big enterprise registration data imputation: Supporting spatiotemporal analysis of industries in China

Posted by: Zhipeng Gui
Publication date: 2018
Research field: Informatics engineering
Paper language: English





Big, fine-grained enterprise registration data that includes time and location information enables us to quantitatively analyze, visualize, and understand the patterns of industries at multiple scales across time and space. However, data quality issues such as incompleteness and ambiguity hinder such analysis and application, and these issues become more challenging as the volume of data grows. High Performance Computing (HPC) frameworks can tackle big data computational issues, but few studies have systematically investigated imputation methods for enterprise registration data in this type of computing environment. In this paper, we propose a big data imputation workflow, based on Apache Spark and a bare-metal computing cluster, to impute enterprise registration data. We integrated external data sources, employed Natural Language Processing (NLP), and compared several machine-learning methods to address the incompleteness and ambiguity problems found in enterprise registration data. Experimental results illustrate the feasibility, efficiency, and scalability of the proposed HPC-based imputation framework, which also serves as a reference for processing other large georeferenced text datasets. Using the imputation results, we visualize and briefly discuss the spatiotemporal distribution of industries in China, demonstrating the potential applications of such data once quality issues are resolved.
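The abstract does not include code, but the pipeline it describes (Spark, NLP-derived text features, and a trained classifier for filling in missing attributes) follows a familiar shape. The sketch below is a minimal, hypothetical PySpark illustration: the column names (`business_scope`, `industry`, `name`), the input path, and the choice of a logistic-regression classifier are all assumptions, and a real pipeline for Chinese registration records would first segment the text with an NLP tokenizer rather than splitting on whitespace.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, StringIndexer, IndexToString
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("EnterpriseImputation").getOrCreate()

# Hypothetical schema: enterprise name, free-text business scope, and an
# industry category that is missing for some records.
records = spark.read.parquet("hdfs:///enterprise/registration")  # hypothetical path

known = records.filter(records.industry.isNotNull())
missing = records.filter(records.industry.isNull())

# handleInvalid="keep" lets the fitted indexer pass through the null labels
# of the records we are imputing.
indexer = StringIndexer(inputCol="industry", outputCol="label",
                        handleInvalid="keep")
tokenizer = Tokenizer(inputCol="business_scope", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="features", numFeatures=1 << 18)
lr = LogisticRegression(maxIter=20)

model = Pipeline(stages=[indexer, tokenizer, tf, lr]).fit(known)

# Imputation step: predict a category for every record that lacks one and
# map the numeric prediction back to the original label string.
decode = IndexToString(inputCol="prediction", outputCol="imputed_industry",
                       labels=model.stages[0].labels)
decode.transform(model.transform(missing)) \
      .select("name", "imputed_industry").show(5)
```

Records with a known industry train the model; records with a null industry receive a predicted category, which is the imputation the abstract refers to.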




Read also

The growing popularity of e-scooters and their rapid expansion across urban streets has attracted widespread attention. A major policy question is whether e-scooters substitute existing mobility options or fill the service gaps left by them. This study addresses this question by analyzing the spatiotemporal patterns of e-scooter service availability and use in Washington DC, focusing on their spatial relationships with public transit and bikesharing. Results from an analysis of three open big datasets suggest that e-scooters have both competing and complementary effects on transit and bikesharing services. The supply of e-scooters significantly overlaps with the service areas of transit and bikesharing, and we classify a majority of e-scooter trips as substitutes for transit and bikesharing uses. A travel-time-based analysis further reveals that when choosing e-scooters over transit, travelers pay a price premium and save some travel time. The price premium is greater during the COVID-19 pandemic but the associated travel-time savings are smaller, implying that public health considerations rather than time-cost tradeoffs are the main driver for many to choose e-scooters over transit during COVID. In addition, we find that e-scooters complement bikesharing and transit by providing services to underserved neighborhoods. A sizeable proportion (about 10 percent) of e-scooter trips are taken to connect with rail services. Future research may combine the big-data-based analysis presented here with traditional methods to further shed light on the interactions between e-scooter services, bikesharing, and public transit.
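As a rough illustration of the travel-time-based analysis described above, the following sketch computes the per-trip price premium and travel-time savings of choosing an e-scooter over the best transit alternative, and the implied value of time. The numbers and column names are entirely made up for the example:

```python
import pandas as pd

# Hypothetical trip table: each e-scooter trip paired with the best transit
# alternative for the same origin-destination pair (times in minutes, costs in USD).
trips = pd.DataFrame({
    "scooter_min":  [11, 8, 15, 6],
    "transit_min":  [18, 9, 27, 7],
    "scooter_cost": [3.9, 2.5, 5.2, 2.1],
    "transit_cost": [2.0, 2.0, 2.0, 2.0],
})

# Per-trip price premium and travel-time savings relative to transit.
trips["premium"] = trips["scooter_cost"] - trips["transit_cost"]
trips["time_saved"] = trips["transit_min"] - trips["scooter_min"]

# Implied value of time (USD per hour) for riders who actually saved time;
# trips with no time savings are excluded to avoid dividing by zero.
saved = trips[trips["time_saved"] > 0]
value_of_time = 60 * saved["premium"] / saved["time_saved"]
print(value_of_time.describe())
```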
Spatiotemporal traffic time series (e.g., traffic volume/speed) collected from sensing systems are often incomplete, with considerable corruption and large amounts of missing values, preventing users from harnessing the full power of the data. Missing data imputation has been a long-standing research topic and critical application for real-world intelligent transportation systems. A widely applied imputation method is low-rank matrix/tensor completion; however, the low-rank assumption only preserves the global structure while ignoring the strong local consistency in spatiotemporal data. In this paper, we propose a low-rank autoregressive tensor completion (LATC) framework by introducing temporal variation as a new regularization term into the completion of a third-order (sensor × time of day × day) tensor. The third-order tensor structure allows us to better capture the global consistency of traffic data, such as the inherent seasonality and day-to-day similarity. To achieve local consistency, we design the temporal variation by imposing an AR(p) model for each time series with coefficients as learnable parameters. Different from previous spatial and temporal regularization schemes, the minimization of temporal variation can better characterize temporal generative mechanisms beyond local smoothness, allowing us to deal with more challenging scenarios such as blackout missing. To solve the optimization problem in LATC, we introduce an alternating minimization scheme that estimates the low-rank tensor and autoregressive coefficients iteratively. We conduct extensive numerical experiments on several real-world traffic data sets, and our results demonstrate the effectiveness of LATC in diverse missing scenarios.
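The following toy sketch illustrates the two ingredients LATC alternates between: singular value thresholding for global low-rank structure and a per-series AR(p) fit for local temporal consistency. It operates on a simple (sensor × time) matrix rather than the paper's third-order tensor, and the update rule is a simplified heuristic rather than the paper's alternating minimization, so treat it as a reading aid only:

```python
import numpy as np

def svt(mat, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

def fit_ar(series, p):
    """Least-squares AR(p) coefficients for one fully-filled time series."""
    Y = series[p:]
    X = np.column_stack([series[p - k: len(series) - k] for k in range(1, p + 1)])
    return np.linalg.lstsq(X, Y, rcond=None)[0]

def latc_like(data, mask, p=2, tau=1.0, lam=0.5, iters=50):
    """Toy scheme in the spirit of LATC on a (sensor x time) matrix: mask is
    True where data is observed; missing entries start at the observed mean."""
    X = np.where(mask, data, data[mask].mean())
    for _ in range(iters):
        X = svt(X, tau)                       # global low-rank structure
        for i in range(X.shape[0]):           # local AR(p) consistency per sensor
            a = fit_ar(X[i], p)
            pred = sum(a[k] * X[i, p - 1 - k: -(1 + k)] for k in range(p))
            miss = ~mask[i, p:]
            # nudge only the missing entries toward the AR prediction
            X[i, p:][miss] = (1 - lam) * X[i, p:][miss] + lam * pred[miss]
        X[mask] = data[mask]                  # keep observed entries fixed
    return X
```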
The missing value problem in spatiotemporal traffic data has long been a challenging topic, in particular for large-scale and high-dimensional data with complex missing mechanisms and diverse degrees of missingness. Recent studies based on the tensor nuclear norm have demonstrated the superiority of tensor learning in imputation tasks by effectively characterizing the complex correlations/dependencies in spatiotemporal data. However, despite the promising results, these approaches do not scale well to large data tensors. In this paper, we focus on addressing the missing data imputation problem for large-scale spatiotemporal traffic data. To achieve both high accuracy and efficiency, we develop a scalable tensor learning model -- Low-Tubal-Rank Smoothing Tensor Completion (LSTC-Tubal) -- based on the existing framework of Low-Rank Tensor Completion, which is well-suited for spatiotemporal traffic data characterized by the multidimensional structure of location × time of day × day. In particular, the proposed LSTC-Tubal model involves a scalable tensor nuclear norm minimization scheme that integrates a linear unitary transformation. Tensor nuclear norm minimization can therefore be solved by singular value thresholding on the transformed matrix of each day, while the day-to-day correlation is effectively preserved by the unitary transform matrix. We compare LSTC-Tubal with state-of-the-art baseline models, and find that LSTC-Tubal can achieve competitive accuracy with a significantly lower computational cost. In addition, LSTC-Tubal will also benefit other tasks in modeling large-scale spatiotemporal traffic data, such as network-level traffic forecasting.
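A minimal sketch of the core computational trick: transform the tensor along the day mode with a unitary transform, then apply cheap singular value thresholding slice by slice. Here the orthonormal FFT stands in for the learned unitary matrix of the paper, which makes this the classic tubal-rank construction rather than LSTC-Tubal itself:

```python
import numpy as np

def svt(mat, tau):
    """Singular value thresholding (works for complex matrices too)."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

def transformed_svt(tensor, tau):
    """One proximal step of a transformed tensor nuclear norm: unitary transform
    along the day mode, then an independent SVT per (location x time) slice."""
    T = np.fft.fft(tensor, axis=2, norm="ortho")   # stand-in unitary transform
    for d in range(T.shape[2]):
        T[:, :, d] = svt(T[:, :, d], tau)          # cheap per-slice SVD
    return np.real(np.fft.ifft(T, axis=2, norm="ortho"))

def complete(data, mask, tau=5.0, iters=100):
    """Toy completion loop: alternate the proximal step with re-imposing
    the observed entries (mask is True where data is observed)."""
    X = np.where(mask, data, data[mask].mean())
    for _ in range(iters):
        X = transformed_svt(X, tau)
        X[mask] = data[mask]
    return X
```

Because the SVDs are taken per slice in the transformed domain, each factorization is small, which is what makes the scheme scale to large tensors.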
Qi Zhang, Ling Liu, Calton Pu (2018)
Container technology has gained increasing attention in recent years and has become an alternative to traditional virtual machines. Some of the primary motivations for enterprises to adopt containers include their convenience for encapsulating and deploying applications, lightweight operations, and efficiency and flexibility in resource sharing. However, there is still no in-depth, systematic comparison of how big data applications, such as Spark jobs, perform in a container environment versus a virtual machine environment. In this paper, by running various Spark applications with different configurations, we evaluate the two environments from several angles: how easily the execution environment can be set up, the makespans of different workloads in each setup, how efficiently hardware resources such as CPU and memory are utilized, and how well each environment scales. The results show that, compared with virtual machines, containers provide a more easily deployed and scalable environment for big data workloads. This work can help practitioners and researchers make more informed decisions when tuning their cloud environments and configuring big data applications, so as to achieve better performance and higher resource utilization.
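A possible harness for the makespan part of such a comparison, with hypothetical host names, container names, and job paths: run the same Spark job in each environment and time it end to end.

```python
import subprocess
import time

# Hypothetical commands: each environment exposes its own spark-submit and
# Spark master URL; the job script and endpoints are placeholders.
ENVIRONMENTS = {
    "container": ["docker", "exec", "spark-container", "spark-submit",
                  "--master", "spark://container-master:7077",
                  "/jobs/wordcount.py"],
    "vm":        ["ssh", "spark-vm", "spark-submit",
                  "--master", "spark://vm-master:7077",
                  "/jobs/wordcount.py"],
}

for name, cmd in ENVIRONMENTS.items():
    start = time.monotonic()
    subprocess.run(cmd, check=True)   # block until the Spark job finishes
    print(f"{name}: makespan = {time.monotonic() - start:.1f}s")
```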
For the architecture community, reasonable simulation time is a strong requirement in addition to performance data accuracy. However, emerging big data and AI workloads are too large at the binary level and prohibitively expensive to run on cycle-accurate simulators. The concept of a data motif, identified as a class of units of computation performed on initial or intermediate data, is the first step towards building proxy benchmarks that mimic real-world big data and AI workloads. However, there has been no practical way to construct a proxy benchmark from data motifs to support simulation-based research. In this paper, we embark on a study to bridge the gap between data motifs and a practical proxy benchmark. We propose a data motif-based proxy benchmark generating methodology based on machine learning, which combines data motifs with different weights to mimic big data and AI workloads. Furthermore, we implement various data motifs using lightweight stacks and apply the methodology to five real-world workloads to construct a suite of proxy benchmarks, considering the data types, patterns, and distributions. The evaluation results show that our proxy benchmarks shorten execution time by hundreds of times on real systems while keeping the average system and micro-architecture performance data accuracy above 90%, even when the input data sets or cluster configurations change. Moreover, the generated proxy benchmarks reflect consistent performance trends across different architectures. To facilitate the community, we will release the proxy benchmarks on the project homepage http://prof.ict.ac.cn/BigDataBench.
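One simple way to realize "combine data motifs with different weights" is to fit non-negative mixing weights so that the weighted motif profiles match the real workload's performance counters. The paper describes a machine-learning method for this step; the NNLS fit below, with made-up numbers and counter names, is only a stand-in for it:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical measurements: rows are performance counters, columns are the
# data motifs A, B, C; `workload` is the real workload's counter profile.
motif_profiles = np.array([
    [1.20, 0.80, 2.10],   # IPC
    [0.05, 0.12, 0.02],   # L2 miss rate
    [3.00, 7.50, 1.10],   # branch MPKI
])
workload = np.array([1.0, 0.07, 4.0])

# Non-negative weights so the weighted motif mix approximates the workload.
weights, residual = nnls(motif_profiles, workload)
weights /= weights.sum()   # normalize to mixing proportions
print(dict(zip("ABC", weights.round(3))), "residual:", round(residual, 3))
```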