
No Free Lunch: Microservice Practices Reconsidered in Industry

Published by: Qilin Xiang
Publication date: 2021
Research field: Informatics Engineering
Language: English

Microservice architecture advocates a number of technologies and practices such as lightweight containers, container orchestration, and DevOps, with the promised benefits of faster delivery, improved scalability, and greater autonomy. However, microservice systems implemented in industry vary a lot in terms of adopted practices and achieved benefits, and are drastically different from what is advocated in the literature. In this article, we conduct an empirical study, including an online survey with 51 responses and 14 interviews with experienced microservice experts, to advance our understanding of microservice practices in industry. As a part of our findings, the empirical study revealed three levels of maturity of microservice systems (from basic to advanced), categorized by the fulfilled benefits of microservices: independent development and deployment, high scalability and availability, and service ecosystem. We also identify 11 practical issues that constrain the microservice capabilities of organizations. For each issue, we summarize the practices that have been explored and adopted in industry, along with the remaining challenges. Our study can help practitioners better position their microservice systems and determine what infrastructures and capabilities are worth investing in. Our study can also help researchers better understand industrial microservice practices and identify useful research problems.




Read also

In this work we present an account of the status of requirements engineering in the gaming industry. Recent papers in the area were surveyed. Characterizations of the gaming industry were deliberated upon by portraying its relations with the market industry. Some research directions in the area of requirements engineering in the gaming industry were also mentioned.
The ultimate limits for the quantum machine learning of quantum data are investigated by obtaining a generalisation of the celebrated No Free Lunch (NFL) theorem. We find a lower bound on the quantum risk (the probability that a trained hypothesis is incorrect when presented with a random input) of a quantum learning algorithm trained via pairs of input and output states when averaged over training pairs and unitaries. The bound is illustrated using a recently introduced QNN architecture.
A precondition for a No Free Lunch theorem is evaluation with a loss function which does not assume a priori superiority of some outputs over others. A previous result for community detection by Peel et al. (2017) relies on a mismatch between the loss function and the problem domain. The loss function computes an expectation over only a subset of the universe of possible outputs; thus, it is only asymptotically appropriate with respect to the problem size. By using the correct random model for the problem domain, we provide a stronger, exact No Free Lunch theorem for community detection. The claim generalizes to other set-partitioning tasks including core/periphery separation, $k$-clustering, and graph partitioning. Finally, we review the literature of proposed evaluation functions and identify functions which (perhaps with slight modifications) are compatible with an exact No Free Lunch theorem.
David H. Wolpert (2020)
The No Free Lunch theorems prove that under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally. As I discuss in this chapter, the importance of the theorems arises from using them to analyze scenarios involving non-uniform distributions, and to compare different algorithms without any assumption about the distribution over problems at all. In particular, the theorems prove that anti-cross-validation (choosing among a set of candidate algorithms based on which has the worst out-of-sample behavior) performs as well as cross-validation, unless one makes an assumption -- which has never been formalized -- about how the distribution over induction problems, on the one hand, is related to the set of algorithms one is choosing among using (anti-)cross-validation, on the other. In addition, they establish strong caveats concerning the significance of the many results in the literature which establish the strength of a particular algorithm without assuming a particular distribution. They also motivate a 'dictionary' between supervised learning and blackbox optimization, which allows one to 'translate' techniques from supervised learning into the domain of blackbox optimization, thereby strengthening blackbox optimization algorithms. In addition to these topics, I also briefly discuss their implications for the philosophy of science.
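The uniform-distribution claim above can be checked exhaustively on a toy problem. The following sketch (the tiny search space and the two query orders are our own illustration, not from the chapter) averages the best objective value found after m queries over every possible function f: {0, 1, 2} -> {0, 1}, for two different fixed query orders. As the NFL theorems predict, the averages are identical for both orders.

```python
from itertools import product

X = [0, 1, 2]  # a tiny search space, small enough to enumerate all functions

def avg_best_after(order, m):
    """Average, over ALL functions f: X -> {0, 1}, of the best value
    seen in the first m queries made in the given fixed order."""
    total, count = 0, 0
    for values in product([0, 1], repeat=len(X)):  # every possible f
        f = dict(zip(X, values))
        total += max(f[x] for x in order[:m])
        count += 1
    return total / count

# Two non-adaptive search algorithms: query ascending vs. descending.
for name, order in [("ascending", [0, 1, 2]), ("descending", [2, 1, 0])]:
    print(name, [avg_best_after(order, m) for m in (1, 2, 3)])
```

Both orders report the same average curve, because averaging over a uniform distribution of objective functions washes out any advantage of one query sequence over another; an advantage can only come from a non-uniform distribution over problems.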
David H. Wolpert (2021)
The important recent book by G. Schurz appreciates that the no-free-lunch theorems (NFL) have major implications for the problem of (meta-)induction. Here I review the NFL theorems, emphasizing that they do not only concern the case where there is a uniform prior -- they prove that there are as many priors (loosely speaking) for which any induction algorithm $A$ out-generalizes some induction algorithm $B$ as vice versa. Importantly though, in addition to the NFL theorems, there are many free lunch theorems. In particular, the NFL theorems can only be used to compare the marginal expected performance of an induction algorithm $A$ with the marginal expected performance of an induction algorithm $B$. There is a rich set of free lunches which instead concern the statistical correlations among the generalization errors of induction algorithms. As I describe, the meta-induction algorithms that Schurz advocates as a solution to Hume's problem are just an example of such a free lunch based on correlations among the generalization errors of induction algorithms. I end by pointing out that the prior that Schurz advocates, which is uniform over bit frequencies rather than bit patterns, is contradicted by thousands of experiments in statistical physics and by the great success of the maximum entropy procedure in inductive inference.