Indian voters from Kashmir to Kanyakumari go to the polls to select the representatives who form their parliament. India's election is one of the largest democratic exercises in world history: about 850 million eligible voters determine which political party or alliance will form the government and, in turn, who will serve as prime minister. Given the electoral rule that a polling place must be located within 2 kilometers of every habitation, this is an enormous task for the Election Commission of India (ECI), which sends around 11 million election workers through tough terrain to reach the last mile. The exercise also imposes ever-growing expenditure on the ECI. This paper proposes using Automated Teller Machines (ATMs) and Point of Sale (POS) machines to cover as many urban, semi-urban, and rural places as possible, given the wide network of the National Financial Switch (NFS) and the increase in connectivity through the Digital India initiative. This would leverage existing infrastructure to support a free, fair, and transparent election.
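As a purely hypothetical illustration of the one-voter-one-vote check such a terminal would need, the Python sketch below invents every name (Voter, electoral_roll, cast_vote); nothing here comes from the paper or from any NFS interface.

```python
# Hypothetical sketch of vote casting at an ATM/POS terminal.
# All identifiers are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Voter:
    epic_number: str   # Elector's Photo Identity Card number
    has_voted: bool = False

electoral_roll = {"ABC1234567": Voter("ABC1234567")}
ballot_box: list[str] = []   # stand-in for an encrypted, audited store

def cast_vote(epic_number: str, candidate_id: str) -> bool:
    """Authenticate against the roll, enforce one vote, record the ballot."""
    voter = electoral_roll.get(epic_number)
    if voter is None or voter.has_voted:
        return False                 # unknown voter or double-vote attempt
    voter.has_voted = True
    ballot_box.append(candidate_id)  # a real system would encrypt/anonymize
    return True

print(cast_vote("ABC1234567", "CAND-42"))  # True
print(cast_vote("ABC1234567", "CAND-07"))  # False: already voted
```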
Conventional algorithmic fairness is West-centric, as seen in its sub-groups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged. We find that in India, data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing in India, where the distance between models and oppressed communities is large. Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.
The study examines the relationship between mobile financial services and individual financial behavior in India, where a sizeable population is yet to be financially included. Addressing the endogeneity associated with the use of mobile financial services through an instrumental variable method, the study finds that using mobile financial services increases the likelihood of investing, holding insurance, and borrowing from formal financial institutions. The analysis further highlights that access to mobile financial services has the potential to bridge the gender divide in financial inclusion. Accelerating access to mobile financial services may partially offset pandemic-induced poverty.
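To make the instrumental-variable step concrete, here is a minimal two-stage least squares (2SLS) sketch in Python with NumPy. The data and the instrument (local mobile-network coverage) are invented for illustration; the study's actual instrument and specification may differ.

```python
# Minimal 2SLS sketch: recover a causal effect despite a confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

coverage = rng.normal(size=n)                 # hypothetical instrument
u = rng.normal(size=n)                        # unobserved confounder
mobile_fin = 0.8 * coverage + 0.5 * u + rng.normal(size=n)  # endogenous regressor
invests = 0.3 * mobile_fin + 0.7 * u + rng.normal(size=n)   # outcome (true effect 0.3)

X = np.column_stack([np.ones(n), mobile_fin])
Z = np.column_stack([np.ones(n), coverage])

# Naive OLS is biased upward because mobile_fin is correlated with u.
beta_ols, *_ = np.linalg.lstsq(X, invests, rcond=None)

# Stage 1: project the endogenous regressor on the instrument.
gamma, *_ = np.linalg.lstsq(Z, mobile_fin, rcond=None)
mobile_hat = Z @ gamma

# Stage 2: regress the outcome on the fitted values.
X_hat = np.column_stack([np.ones(n), mobile_hat])
beta_iv, *_ = np.linalg.lstsq(X_hat, invests, rcond=None)

print("OLS estimate :", beta_ols[1])  # biased away from 0.3
print("2SLS estimate:", beta_iv[1])   # close to the true effect 0.3
```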
Health is an important prerequisite for people's well-being and happiness. Several previous studies have focused on the occurrence of specific diseases, such as forecasting the number of dengue and malaria cases. This paper uses time series data for trend analysis and forecasting with an ARIMA model to visualize trends in health data on the ten leading causes of death, the leading causes of morbidity, and the leading causes of infant deaths in the Philippines, presented in tabular form. A trend figure for each disease is produced individually using the GRETL software. The forecasts for the leading causes of death show that diseases of the heart, diseases of the vascular system, accidents, chronic lower respiratory diseases, and tuberculosis (all forms) exhibit only slight change; malignant neoplasms exhibit unstable behavior; and pneumonia, diabetes mellitus, nephritis/nephrotic syndrome and nephrosis, and certain conditions originating in the perinatal period exhibit a decreasing pattern.
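The paper fits its ARIMA models in GRETL; as an illustrative equivalent, the Python sketch below fits an ARIMA(1,1,0) to a hypothetical annual series of heart-disease deaths (placeholder numbers, not the paper's data) and projects five years ahead with statsmodels.

```python
# Illustrative ARIMA forecast in Python (the paper itself used GRETL).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual death counts (placeholder data, not from the paper).
deaths = pd.Series(
    [74134, 75591, 77060, 79185, 80534, 82906, 84120, 85906, 88077, 90270],
    index=pd.period_range("2006", periods=10, freq="Y"),
)

model = ARIMA(deaths, order=(1, 1, 0))  # AR(1) on first differences
result = model.fit()
forecast = result.forecast(steps=5)     # project five years ahead
print(forecast)
```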
Recent advances in artificial intelligence (AI) have led to an explosion of multimedia applications (e.g., computer vision (CV) and natural language processing (NLP)) across commercial, industrial, and intelligence domains. In particular, the use of AI applications in a national security environment is often problematic because the opaque nature of these systems leaves humans unable to understand how the results came about. A reliance on black boxes to generate predictions and inform decisions is potentially disastrous. This paper explores how applying standards during each stage of the development of an AI system deployed and used in a national security environment would help enable trust. Specifically, we focus on the standards outlined in Intelligence Community Directive 203 (Analytic Standards) to subject machine outputs to the same rigorous standards as analysis performed by humans.