
Computation Resource Allocation Solution in Recommender Systems

Submitted by Xun Yang
Publication date: 2021
Paper language: English





Recommender systems rely heavily on increasing computation resources to improve their business goals. By deploying computation-intensive models and algorithms, these systems are able to infer user interests and display selected ads or commodities from the candidate set to maximize their business goals. However, such systems face two challenges in achieving their goals. On the one hand, facing massive online requests, computation-intensive models and algorithms push computation resources to the limit. On the other hand, the response time of these systems is strictly limited to a short period, e.g., 300 milliseconds in our real system, which is also being exhausted by increasingly complex models and algorithms. In this paper, we propose the computation resource allocation solution (CRAS), which maximizes the business goal under limited computation resources and response time. We comprehensively illustrate the problem and formulate it as an optimization problem with multiple constraints, which can be decomposed into independent sub-problems. To solve the sub-problems, we propose a revenue function to facilitate the theoretical analysis and obtain the optimal computation resource allocation strategy. To address applicability issues, we devise a feedback control system that helps our strategy constantly adapt to the changing online environment. The effectiveness of our method is verified by extensive experiments based on a real dataset from Taobao.com. We also deploy our method in the display advertising system of Alibaba. The online results show that our computation resource allocation solution achieves significant business goal improvement without any increase in computation cost, which demonstrates the efficacy of our method in real industrial practice.
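The abstract does not spell out the allocation strategy itself. As a rough illustration of the kind of problem CRAS addresses, the sketch below greedily assigns a fixed compute budget across request groups, assuming each group has a concave revenue function of allocated compute (for concave functions, greedy marginal-gain allocation is optimal). All names and the revenue curves are hypothetical, not taken from the paper.

```python
import math

def allocate_budget(revenue_fns, budget, step=1):
    """Greedily give each unit of compute to the group whose revenue
    function gains most from one more unit (optimal when revenue
    functions are concave in allocated compute)."""
    alloc = [0] * len(revenue_fns)
    for _ in range(0, budget, step):
        gains = [f(a + step) - f(a) for f, a in zip(revenue_fns, alloc)]
        best = max(range(len(gains)), key=gains.__getitem__)
        alloc[best] += step
    return alloc

# Two hypothetical request groups with diminishing returns;
# the first yields more revenue per unit of compute.
fns = [lambda c: 3 * math.log1p(c), lambda c: 1 * math.log1p(c)]
print(allocate_budget(fns, 10))  # the higher-revenue group gets more compute
```

In an online system, a feedback controller of the kind the paper mentions would periodically re-estimate these revenue curves and re-run the allocation as traffic shifts.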




Read also

One of the key features of this paper is that the agents' opinions in a social network are assumed to be influenced not only by the other agents but also by two competing marketers. One of our contributions is to propose a pragmatic game-theoretical formulation of the problem and to conduct the complete corresponding equilibrium analysis (existence, uniqueness, dynamic characterization, and determination). Our analysis provides practical insights into how a marketer should exploit its knowledge about the social network to allocate its marketing or advertising budget among the agents (who are the consumers). By providing relevant definitions for the agent influence power (AIP) and the gain of targeting (GoT), the benefit of using a smart budget allocation policy instead of a uniform one is assessed, and operating conditions under which it is potentially high are identified.
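The smart-versus-uniform comparison above can be illustrated with a toy sketch. This is not the paper's equilibrium solution: the influence weights and the proportional split below are hypothetical stand-ins for the AIP-based targeting the abstract describes.

```python
def split_budget(weights, budget, smart=True):
    """Split a marketing budget across agents: uniformly, or in
    proportion to each agent's (hypothetical) influence power."""
    if not smart:
        return [budget / len(weights)] * len(weights)
    total = sum(weights)
    return [budget * w / total for w in weights]

influence = [5.0, 2.0, 1.0, 1.0, 1.0]  # hypothetical agent influence powers
print(split_budget(influence, 100.0))               # targeted split
print(split_budget(influence, 100.0, smart=False))  # uniform baseline
```

The gain of targeting would then be some measure of outcome difference between the two policies under the opinion dynamics model.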
During the last two years, the goal of many researchers has been to squeeze the last bit of performance out of HPC systems for AI tasks. Often this discussion is held in the context of how fast ResNet50 can be trained. Unfortunately, ResNet50 is no longer a representative workload in 2020. Thus, we focus on recommender systems, which account for most of the AI cycles in cloud computing centers. More specifically, we focus on Facebook's DLRM benchmark. By enabling it to run on the latest CPU hardware and software tailored for HPC, we are able to achieve more than two orders of magnitude improvement in performance (110x) on a single socket compared to the reference CPU implementation, and high scaling efficiency up to 64 sockets, while fitting ultra-large datasets. This paper discusses the optimization techniques for the various operators in DLRM and which components of the system are stressed by these different operators. The presented techniques are applicable to a broader set of DL workloads that pose the same scaling challenges/characteristics as DLRM.
114 - Chi Ho Yeung 2015
Recommender systems are present in many web applications to guide our choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products is questionable. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one relies too strongly on the system. Nevertheless, with sufficient information about user preferences, recommendations become accurate, and an abrupt transition to this accurate regime is observed for some algorithms. On the other hand, we find that a high accuracy evaluated by common accuracy metrics does not necessarily correspond to a high real accuracy or a benefit for users, which serves as an alarm for operators and researchers of recommender systems. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but relying too strongly on the system may render it ineffective.
85 - Han Hu , Weiwei Song , Qun Wang 2021
Mobile edge computing (MEC)-enabled Internet of Things (IoT) networks have been deemed a promising paradigm to support massive energy-constrained and computation-limited IoT devices. IoT with mobility has found tremendous new services in the 5G era and the forthcoming 6G era, such as autonomous driving and vehicular communications. However, the mobility of IoT devices has not been sufficiently studied in existing works. In this paper, the offloading decision and resource allocation problem is studied with mobility taken into consideration. The long-term average sum service cost of all the mobile IoT devices (MIDs) is minimized by jointly optimizing the CPU-cycle frequencies, the transmit power, and the user association vector of MIDs. An online mobility-aware offloading and resource allocation (OMORA) algorithm is proposed based on Lyapunov optimization and semi-definite programming (SDP). Simulation results demonstrate that our proposed scheme can balance the system service cost and the delay performance, and outperforms other offloading benchmark methods in terms of the system service cost.
82 - Siyi Liu , Chen Gao , Yihong Chen 2021
Embedding-based representation learning is commonly used in deep learning recommendation models to map raw sparse features to dense vectors. The traditional embedding manner, which assigns a uniform size to all features, has two issues. First, the numerous features inevitably lead to a gigantic embedding table that incurs a high memory cost. Second, it is likely to cause over-fitting for those features that do not require a large representation capacity. Existing works that try to address the problem either cause a significant drop in recommendation performance or suffer from unaffordable training time cost. In this paper, we propose a novel approach, named PEP (short for Plug-in Embedding Pruning), to reduce the size of the embedding table while avoiding a drop in recommendation accuracy. PEP prunes embedding parameters, where the pruning threshold(s) can be adaptively learned from data. Therefore we can automatically obtain a mixed-dimension embedding scheme by pruning redundant parameters for each feature. PEP is a general framework that can plug into various base recommendation models. Extensive experiments demonstrate that it can efficiently cut down embedding parameters and boost the base models' performance. Specifically, it achieves strong recommendation performance while reducing 97-99% of parameters. As for the computation cost, PEP only brings an additional 20-30% time cost compared with base models. Codes are available at https://github.com/ssui-liu/learnable-embed-sizes-for-RecSys.
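For intuition only: PEP learns its pruning thresholds jointly with training, but the end effect is a sparse, mixed-capacity embedding table. The sketch below shows only that end effect, applying a fixed magnitude threshold to a random table; the threshold value and table sizes are hypothetical, and this is not the paper's learnable-threshold method.

```python
import numpy as np

def prune_embeddings(table, threshold):
    """Zero out embedding entries whose magnitude falls below
    `threshold`, yielding a sparse table where each feature keeps
    only its largest-magnitude dimensions."""
    mask = np.abs(table) >= threshold
    return table * mask, mask

rng = np.random.default_rng(0)
emb = rng.normal(scale=0.1, size=(1000, 16))  # hypothetical embedding table
pruned, mask = prune_embeddings(emb, 0.15)
print(f"sparsity: {1 - mask.mean():.2%}")  # fraction of pruned parameters
```

In PEP itself, the threshold is a trainable parameter updated by gradient descent through a soft-thresholding reparameterization, so different features can end up with different effective dimensions.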
