
Exploring Server-side Blocking of Regions

Posted by Michael Tschantz
Publication date: 2018
Research field: Informatics Engineering
Language: English





One of the Internet's greatest strengths is the degree to which it facilitates access to any of its resources from users anywhere in the world. However, users in the developing world have complained of websites blocking their countries. We explore this phenomenon in a measurement study. With a combination of automated page loads, manual checking, and traceroutes, we can say with high confidence that some websites do block users from some regions. We cannot say with high confidence why, or even based on what criteria, they do so, except in some cases where the website states a reason. We do report qualitative evidence that fears of abuse and the costs of serving requests to some regions may play a role.
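The following is a minimal sketch of the kind of automated page-load check such a study relies on, assuming a set of target URLs and per-region HTTP proxies acting as vantage points. The site list, proxy endpoints, and block-page keywords are hypothetical placeholders, not the authors' actual measurement tooling.

# Minimal sketch of an automated check for region-based blocking.
# The site list, per-region proxy endpoints, and block-page keywords
# are illustrative assumptions, not the paper's actual tooling.
import requests

SITES = ["https://example.com"]  # hypothetical targets
REGION_PROXIES = {
    # region -> HTTP proxy reachable from a vantage point in that region
    "region-a": "http://proxy-a.example:8080",  # hypothetical
    "region-b": "http://proxy-b.example:8080",  # hypothetical
}
BLOCK_HINTS = ["access denied", "not available in your country"]

def probe(url: str, proxy: str) -> str:
    """Load a page via a regional vantage point and classify the outcome."""
    try:
        resp = requests.get(url, proxies={"http": proxy, "https": proxy},
                            timeout=15)
    except requests.RequestException:
        return "network-error"   # could be blocking or a transient failure
    if resp.status_code in (403, 451):
        return "likely-blocked"  # 451: Unavailable For Legal Reasons
    body = resp.text.lower()
    if any(hint in body for hint in BLOCK_HINTS):
        return "likely-blocked"
    return "accessible"

for url in SITES:
    for region, proxy in REGION_PROXIES.items():
        print(url, region, probe(url, proxy))

A 403 or 451 response, or a block-page phrase in the body, is only suggestive on its own; as the study's methodology implies, distinguishing deliberate regional blocking from transient failures takes repeated loads, manual checking, and corroborating traceroutes.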



Read also

We study how to design edge server placement and server scheduling policies under workload uncertainty for 5G networks. We introduce a new metric called the resource pooling factor to handle unexpected workload bursts. Maximizing this metric offers a strong enhancement on top of robust optimization against workload uncertainty. Using both real and synthetic traces, we show that the proposed server placement and server scheduling policies not only demonstrate better robustness against workload uncertainty than existing approaches, but also significantly reduce the cost of service providers. Specifically, to achieve a close-to-zero workload rejection rate, the proposed server placement policy reduces the number of required edge servers by about 25% compared with the state-of-the-art approach; the proposed server scheduling policy reduces the energy consumption of edge servers by about 13% without much impact on service quality.
Popular dispatching policies such as join-the-shortest-queue (JSQ), join-the-smallest-work (JSW), and their power-of-two variants are used in load-balancing systems where the instantaneous queue length or workload at all queues, or a subset of them, can be queried. In situations where the dispatcher has an associated memory, one can minimize this query overhead by maintaining a list of idle servers to which jobs can be dispatched. Recent alternative approaches that do not require querying such information include cancel-on-start and cancel-on-complete replication policies. The downside of such policies, however, is that the servers must communicate the start or completion of each service to the dispatcher and must allow cancellation of redundant copies. In this work, we consider a load-balancing environment where the dispatcher cannot query load information, does not have a memory, and cannot cancel any replica that it may have created. In such a rigid environment, we allow the dispatcher to append a server-side cancellation criterion to each job or its replica. A job or a replica is served only if it satisfies the predefined criterion at the time of service. We focus on a criterion based on the waiting time experienced by a job or its replica and analyze several variants of this policy under the assumption of asymptotic independence of queues. The proposed policies are novel and perform remarkably well in spite of the rigid operating constraints.
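As a concrete illustration, here is a minimal simulation sketch of a waiting-time cancellation criterion in the rigid setting the abstract describes: the dispatcher replicates each job to d random servers, and a replica is served only if its wait at the instant service would begin is at most w. The FIFO-workload model, exponential service times, and all parameter values are assumptions for illustration, not the paper's exact model.

# Minimal sketch: a dispatcher replicates each job to D random servers
# and appends a waiting-time cancellation criterion, i.e. a replica is
# served only if its wait at service start is <= W_MAX.
# All parameter values are illustrative assumptions.
import random

N_SERVERS, D, W_MAX, MU = 50, 2, 2.0, 1.0
LAM = 0.4 * N_SERVERS          # total Poisson arrival rate (load 0.8 with D=2)
free_at = [0.0] * N_SERVERS    # time at which each FIFO queue drains
t, served, lost, resp_sum = 0.0, 0, 0, 0.0

random.seed(1)
for _ in range(100_000):
    t += random.expovariate(LAM)             # next job arrival
    finishes = []
    for s in random.sample(range(N_SERVERS), D):
        wait = max(0.0, free_at[s] - t)
        if wait <= W_MAX:                    # criterion met: replica is served
            svc = random.expovariate(MU)
            free_at[s] = t + wait + svc
            finishes.append(wait + svc)
        # else: replica is discarded at its service instant, adding no work
    if finishes:
        served += 1
        resp_sum += min(finishes)            # job done at first served replica
    else:
        lost += 1

print(f"served={served} lost={lost} mean response={resp_sum / served:.3f}")

Note that, faithful to the rigid setting, the dispatcher never cancels a redundant copy: if both replicas meet the criterion, both are served and the extra service time is simply wasted capacity.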
The performance of large-scale distributed compute systems is adversely impacted by stragglers when the execution time of a job is uncertain. To manage stragglers, we consider a multi-fork approach to job scheduling, where additional parallel servers are added at forking instants. In terms of the forking instants and the number of additional servers, we compute the job completion time and the cost of server utilization when the task processing times are assumed to follow a shifted exponential distribution. We use this study to provide insights into the design of the forking instants and the associated number of additional servers to be started. Numerical results demonstrate orders-of-magnitude improvement in cost in the regime of low completion times compared with prior works.
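The following Monte Carlo sketch illustrates the single-fork special case of this idea: one server starts the job, extra replicas are forked at a fixed instant if the job is still running, and all copies stop when the first finishes. The shifted-exponential parameters, the forking instant, and the cost accounting are illustrative assumptions rather than the paper's exact formulation.

# Minimal Monte Carlo sketch of single-fork scheduling: one server starts
# the job at time 0; at forking instant TF, R extra servers start fresh
# replicas; all copies stop when the first finishes. Task times follow a
# shifted exponential DELTA + Exp(MU). All values are illustrative.
import random

DELTA, MU, TF, R, TRIALS = 1.0, 0.5, 3.0, 2, 100_000

def shifted_exp():
    return DELTA + random.expovariate(MU)

random.seed(1)
tot_time = tot_cost = 0.0
for _ in range(TRIALS):
    done = shifted_exp()                     # original copy's finish time
    if done > TF:                            # job still running: fork replicas
        for _ in range(R):
            done = min(done, TF + shifted_exp())
    tot_time += done
    # cost = total busy server-time: the original runs until `done`, and
    # each forked server runs from TF until completion (if forked at all)
    tot_cost += done + (R * (done - TF) if done > TF else 0.0)

print(f"mean completion {tot_time/TRIALS:.3f}, "
      f"mean server cost {tot_cost/TRIALS:.3f}")

Sweeping TF and R in a sketch like this exposes the trade-off the abstract refers to: earlier, wider forking lowers completion time but raises the server-utilization cost.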
GPS signals, the main origin of navigation, are not functional in indoor environments. Therefore, Wi-Fi access points have started to be increasingly used for localization and tracking inside buildings by relying on a fingerprint-based approach. However, with these types of approaches, several concerns regarding the privacy of the users have arisen. Malicious individuals can determine a client's daily habits and activities by simply analyzing their wireless signals. While there are already efforts to incorporate privacy into the existing fingerprint-based approaches, they are limited by the characteristics of the homomorphic cryptographic schemes they employ. In this paper, we propose to enhance the performance of these approaches by exploiting another homomorphic algorithm, namely DGK, with its unique encrypted sorting capability, thus pushing most of the computation to the server side. We developed an Android app and tested our system within a Columbia University dormitory. Compared to existing systems, the results indicate that more power savings can be achieved at the client side, and that DGK can be a viable option given more powerful server computation capabilities.
To keep up with demand, servers will scale up to handle hundreds of thousands of clients simultaneously. Much of the community's focus has been on scaling servers in terms of aggregate traffic intensity (packets transmitted per second). However, bottlenecks caused by the increasing number of concurrent clients, resulting in a large number of concurrent flows, have received little attention. In this work, we focus on identifying such bottlenecks. In particular, we define two broad categories of problems: admitting more packets into the network stack than can be handled efficiently, and increasing per-packet overhead within the stack. We show that these problems contribute to high CPU usage and network performance degradation in terms of aggregate throughput and RTT. Our measurement and analysis are performed in the context of the Linux networking stack, the most widely used publicly available networking stack. Further, we discuss the relevance of our findings to other network stacks. The goal of our work is to highlight considerations required in the design of future networking stacks to enable efficient handling of large numbers of clients and flows.