The debate on net neutrality and events pointing towards its possible violation have led to the development of tools to detect deliberate traffic discrimination on the Internet. Given the complex nature of the Internet, neutrality violations are not easy to detect, and the tools developed so far suffer from various limitations. In this paper, we study many challenges in detecting such violations and discuss possible approaches to mitigate them. As a case study, we focus on the tool Wehe \cite{wehe}, discuss its limitations, and propose aspects that need to be strengthened. Wehe is the most recent tool to detect neutrality violations. Despite Wehe's wide utility and possible influence over policy decisions, its mechanisms have not yet been fully validated by researchers other than the original tool developers. We seek to fill this gap by conducting a thorough and in-depth validation of Wehe. Our validation uses the Wehe App, a client-server setup mimicking Wehe's behavior, and its theoretical arguments. We validated the Wehe App for its methodology, traffic discrimination detection, and operational environments. We found that the critical weaknesses of the Wehe App stem from its design choices: the use of port 80, overlooking the effect of background traffic, and the direct performance comparison.
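To make the critique concrete, the following Python sketch illustrates the kind of direct performance comparison the abstract refers to: an application trace and a content-obfuscated control trace are replayed to a server, and the two throughput samples are compared statistically. This is a minimal illustration, not Wehe's actual implementation; the server address, port, sampling interval, and significance threshold are assumptions, and, as the paper points out, fixing the port to 80 and ignoring concurrent background traffic are exactly the design choices that weaken such a test.

# Hedged sketch, not Wehe's actual implementation: replay an application
# trace and a control trace to a replay server and compare the resulting
# throughput samples with a two-sample KS test. Host, port, interval,
# and alpha are assumed values for illustration.
import socket
import time
from scipy.stats import ks_2samp

def replay_and_measure(payload_chunks, host="replay.example.org", port=80,
                       interval=0.5):
    """Send chunks to a replay server and return throughput samples (bytes/s)."""
    samples, sent, t0 = [], 0, time.time()
    with socket.create_connection((host, port)) as s:
        for chunk in payload_chunks:
            s.sendall(chunk)
            sent += len(chunk)
            elapsed = time.time() - t0
            if elapsed >= interval:
                samples.append(sent / elapsed)
                sent, t0 = 0, time.time()
    return samples

def differentiation_detected(original_chunks, control_chunks, alpha=0.05):
    """Flag discrimination if the two throughput distributions differ significantly."""
    original = replay_and_measure(original_chunks)
    control = replay_and_measure(control_chunks)
    _, p_value = ks_2samp(original, control)
    # Note: this comparison ignores background traffic entirely, which is
    # one of the weaknesses discussed in the paper.
    return p_value < alpha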
The development of AI applications is a multidisciplinary effort, involving multiple roles collaborating with the AI developers, an umbrella term we use to include data scientists and other AI-adjacent roles on the same team. During these collaborations, there is a knowledge mismatch between AI developers, who are skilled in data science, and external stakeholders who are typically not. This difference leads to communication gaps, and the onus falls on AI developers to explain data science concepts to their collaborators. In this paper, we report on a study including analyses of both interviews with AI developers and artifacts they produced for communication. Using the analytic lens of shared mental models, we report on the types of communication gaps that AI developers face, how AI developers communicate across disciplinary and organizational boundaries, and how they simultaneously manage issues regarding trust and expectations.
By all measures, wireless networking has seen explosive growth over the past decade. Fourth Generation Long Term Evolution (4G LTE) cellular technology has increased the bandwidth available for smartphones, in essence delivering broadband speeds to mobile devices. The most recent 5G technology further enhances transmission speeds and cell capacity, as well as reducing latency, through the use of different radio technologies, and is expected to provide Internet connections that are an order of magnitude faster than 4G LTE. Technology continues to advance rapidly, however, and the next generation, 6G, is already being envisioned. 6G will make possible a wide range of powerful new applications, including holographic telepresence, telehealth, remote education, ubiquitous robotics and autonomous vehicles, smart cities and communities (IoT), and advanced manufacturing (Industry 4.0, sometimes referred to as the Fourth Industrial Revolution), to name but a few. The advances we will see begin at the hardware level and extend all the way to the top of the software stack. Artificial Intelligence (AI) will also start playing a greater role in the development and management of wireless networking infrastructure by becoming embedded in applications throughout all levels of the network. The resulting benefits to society will be enormous. At the same time that these exciting new wireless capabilities are appearing rapidly on the horizon, a broad range of research challenges looms ahead. These stem from the ever-increasing complexity of the hardware and software systems, along with the need to provide infrastructure that is robust and secure while simultaneously protecting the privacy of users. Here we outline some of those challenges and provide recommendations for the research that needs to be done to address them.
An independent ethical assessment of an artificial intelligence system is an impartial examination of the system's development, deployment, and use in alignment with ethical values. System-level qualitative frameworks that describe high-level requirements and component-level quantitative metrics that measure individual ethical dimensions have been developed over the past few years. However, there exists a gap between the two, which hinders the execution of independent ethical assessments in practice. This study bridges this gap and designs a holistic independent ethical assessment process for a text classification model, with a special focus on the task of hate speech detection. The assessment is further augmented with protected-attribute mining and counterfactual-based analysis to enhance bias assessment. It covers assessments of technical performance, data bias, embedding bias, classification bias, and interpretability. The proposed process is demonstrated through an assessment of a deep hate speech detection model.
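As a rough illustration of the counterfactual-based analysis mentioned above, the sketch below measures how often a text classifier's label flips when identity terms in the input are swapped. The classifier interface, the identity-term pairs, and the flip-rate metric are assumptions for illustration, not the paper's actual protected-attribute mining or assessment procedure.

# Hedged sketch of a counterfactual-based bias check for a text classifier.
# predict() and the identity-term pairs are hypothetical placeholders.
from typing import Callable, List, Tuple

def counterfactual_flip_rate(predict: Callable[[str], int],
                             texts: List[str],
                             swaps: List[Tuple[str, str]]) -> float:
    """Fraction of matching inputs whose label changes after an identity-term swap."""
    flips, total = 0, 0
    for text in texts:
        for term_a, term_b in swaps:
            if term_a in text:
                total += 1
                if predict(text) != predict(text.replace(term_a, term_b)):
                    flips += 1
    return flips / total if total else 0.0

# Example usage with a hypothetical model whose predict_one() returns 1 for hate speech:
# rate = counterfactual_flip_rate(model.predict_one, held_out_texts,
#                                 swaps=[("muslim", "christian"), ("women", "men")])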
In recent years, Header Bidding (HB) has gained popularity among web publishers, challenging the status quo in the ad ecosystem. Contrary to the traditional waterfall standard, HB aims to give publishers back control of their ad inventory and to increase transparency, fairness, and competition among advertisers, resulting in higher ad-slot prices. Although promising, little is known about how this ad protocol works: What are HB's possible implementations, who are the major players, and what is its network and UX overhead? To address these questions, we design and implement HBDetector: a novel methodology to detect HB auctions on a website in real time. By crawling 35,000 top Alexa websites, we collect and analyze a dataset of 800k auctions. We find that: (i) 14.28% of top websites utilize HB; (ii) publishers prefer to collaborate with a few Demand Partners, who also dominate the waterfall market; (iii) HB latency can be significantly higher (up to 3x in the median case) than waterfall.
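For intuition, the following sketch shows one simple heuristic for flagging pages that likely run Header Bidding: scanning the page source for well-known HB wrapper libraries such as Prebid.js. This is only a hedged illustration, not HBDetector itself; the paper's methodology observes live auctions, and the signature list here is an assumption.

# Hedged sketch, not HBDetector: flag a page as likely using Header Bidding
# if its HTML references a well-known HB wrapper library. The signature list
# is an assumption, and this check sees none of the live auction traffic.
import re
import requests

HB_SIGNATURES = [
    r"prebid(\.min)?\.js",  # Prebid.js wrapper script
    r"window\.pbjs",        # Prebid.js global object
    r"apstag\.js",          # Amazon's header bidding library
]

def looks_like_header_bidding(url: str, timeout: int = 10) -> bool:
    """Return True if the page source references a known HB wrapper."""
    html = requests.get(url, timeout=timeout).text
    return any(re.search(sig, html, re.IGNORECASE) for sig in HB_SIGNATURES)

# Example: looks_like_header_bidding("https://www.example.com")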
In recent years, AI-generated art has become very popular. From generating artworks in the style of famous artists like Paul Cezanne and Claude Monet to simulating the styles of art movements like Ukiyo-e, a variety of creative applications have been explored using AI. From an art-historical perspective, these applications raise some ethical questions. Can AI model artists' styles without stereotyping them? Does AI do justice to the socio-cultural nuances of art movements? In this work, we take a first step towards analyzing these issues. Leveraging directed acyclic graphs to represent the potential process of art creation, we propose a simple metric to quantify confounding bias due to not modeling the influence of art movements when learning artists' styles. As a case study, we consider the popular cycleGAN model and analyze confounding bias across various genres. The proposed metric is more effective than a state-of-the-art outlier detection method at understanding the influence of art movements on artworks. We hope our work will elucidate important shortcomings of computationally modeling artists' styles and trigger discussions on the accountability of AI-generated art.
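To illustrate the DAG framing (though not the paper's actual metric), the sketch below encodes one plausible art-creation graph in which the art movement influences both the artist's style and the artwork, so leaving the movement unmodeled introduces a confounder on the style-to-artwork relationship. The node names and edges are assumptions for illustration only.

# Hedged illustration, not the paper's metric: encode an assumed art-creation
# DAG and identify confounders of the style-to-artwork edge.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("art_movement", "artist_style"),  # the movement shapes the artist's style
    ("art_movement", "artwork"),       # the movement also shapes the artwork directly
    ("artist_style", "artwork"),       # the style is what a model like cycleGAN learns
])

# A common cause of both "artist_style" and "artwork" confounds the learned
# style-to-artwork relationship when it is left out of the model.
confounders = set(dag.predecessors("artist_style")) & set(dag.predecessors("artwork"))
print("Potential confounders of artist_style -> artwork:", confounders)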