
Critical Business Decision Making for Technology Startups -- A PerceptIn Case Study

Added by: Shaoshan Liu
Publication date: 2020
Language: English
Authors: Shaoshan Liu





Most business decisions are made with analysis, but some are judgment calls not susceptible to analysis due to time or information constraints. In this article, we present a real-life case study of critical business decision making at PerceptIn, an autonomous driving technology startup. In its early years, PerceptIn had to make a decision on the design of the computing system for its autonomous vehicle products. By providing details on PerceptIn's decision process and the results of the decision, we hope to offer insights that can benefit entrepreneurs and engineering managers in technology startups.
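To make the contrast with analysis-driven decisions concrete, here is a purely illustrative back-of-the-envelope comparison of two candidate vehicle computing-system designs. Every figure and name below is a hypothetical placeholder for illustration, not data from the paper.

    # Hypothetical comparison of two vehicle computing-system designs.
    # All figures are illustrative placeholders, not PerceptIn's numbers.
    DESIGNS = {
        "gpu_server": {"unit_cost_usd": 5000, "power_w": 500},
        "soc_plus_fpga": {"unit_cost_usd": 1000, "power_w": 60},
    }

    def per_vehicle_burden(design, battery_wh=2000):
        """Return the unit cost and the fraction of a hypothetical
        battery budget the design consumes per hour of operation."""
        spec = DESIGNS[design]
        return spec["unit_cost_usd"], spec["power_w"] / battery_wh

    for name in DESIGNS:
        cost, frac = per_vehicle_burden(name)
        print(f"{name}: ${cost} per vehicle, {frac:.1%} of battery per hour")

An analysis like this is tractable when cost and power figures are known; the paper's point is that many startup decisions must be made before such numbers are available.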



Related Research

How to attribute responsibility for the actions of autonomous artificial intelligence (AI) systems has been widely debated across the humanities and social science disciplines. This work presents two experiments ($N$=200 each) that measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents in the context of bail decision-making. Using vignettes adapted from real life, our experiments show that AI agents are held causally responsible and blamed similarly to human agents for an identical task. However, there was a meaningful difference in how people perceived these agents' moral responsibility; human agents were ascribed a higher degree of present-looking and forward-looking notions of responsibility than AI agents. We also found that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature. We discuss policy and HCI implications of these findings, such as the need for explainable AI in high-stakes scenarios.
Technology is an extremely potent tool that can be leveraged for human development and social good. Owing to the great importance of environment and human psychology in driving human behavior, and the ubiquity of technology in modern life, there is a need to leverage the insights and capabilities of both fields together for nudging people towards behavior that is optimal in some sense (personal or social). In this regard, the field of persuasive technology, which proposes to infuse technology with appropriate design and incentives using insights from psychology, behavioral economics, and human-computer interaction, holds a lot of promise. While persuasive technology is already being developed and is at play in many commercial applications, it can have great social impact in the field of Information and Communication Technology for Development (ICTD), which uses Information and Communication Technology (ICT) for human developmental ends such as education and health. In this paper, we explore what persuasive technology is and how it can be used for the ends of human development. To develop the ideas in a concrete setting, we present a case study outlining how persuasive technology can be used for human development in Pakistan, a developing South Asian country that suffers from many of the problems that plague typical developing countries.
Using the concept of principal stratification from the causal inference literature, we introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making. The key idea is that one should not discriminate among individuals who would be similarly affected by the decision. Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision. We propose an axiomatic assumption that all groups are created equal. This assumption is motivated by a belief that protected attributes such as race and gender should have no direct causal effects on potential outcomes. Under this assumption, we show that principal fairness implies all three existing statistical fairness criteria once we account for relevant covariates. This result also highlights the essential role of conditioning covariates in resolving the previously recognized tradeoffs between the existing statistical fairness criteria. Finally, we discuss how to empirically choose conditioning covariates and then evaluate the principal fairness of a particular decision.
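For readers unfamiliar with principal stratification, one standard way to write the definition down (our notation, consistent with the idea stated above but not quoted from the paper) is the following:

    Let $D \in \{0,1\}$ denote the decision, $A$ the protected attribute,
    and $R = (Y(0), Y(1))$ the principal stratum formed by an individual's
    potential outcomes under each decision. Principal fairness requires
    \[
      \Pr(D = 1 \mid R = r, A = a) = \Pr(D = 1 \mid R = r)
      \quad \text{for all } r, a,
    \]
    i.e., $D \perp A \mid R$: among individuals who would be similarly
    affected by the decision, the decision rate does not depend on the
    protected attribute.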
Bo Yu, Jie Tang, Shaoshan Liu (2020)
PerceptIn develops and commercializes autonomous vehicles for micromobility around the globe. This paper provides a holistic summary of PerceptIn's development and operating experiences. It tells the business tale behind our product and presents the development of the computing system for our vehicles. We illustrate the design decisions made for the computing system and show the advantage of offloading localization workloads onto an FPGA platform.
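As a minimal sketch of the offloading pattern described here (the class and function names are hypothetical; the paper's actual interfaces are not shown in this summary), a dispatcher might route the localization workload to the FPGA when one is present and fall back to the CPU otherwise:

    # Sketch of offloading localization to an FPGA, with a CPU fallback.
    # Names are hypothetical stand-ins, not PerceptIn's interfaces.
    class FpgaLocalizer:
        """Stand-in for a driver binding to an FPGA localization kernel."""
        def available(self):
            return False  # assume no FPGA in this demo environment
        def localize(self, sensor_frame):
            raise NotImplementedError

    def cpu_localize(sensor_frame):
        # Placeholder: a real implementation would run feature extraction
        # and pose optimization here.
        return {"x": 0.0, "y": 0.0, "heading": 0.0}

    def localize(sensor_frame, fpga=FpgaLocalizer()):
        # Offload the compute-heavy localization step when the FPGA is
        # present; keep the CPU path as a functional fallback.
        if fpga.available():
            return fpga.localize(sensor_frame)
        return cpu_localize(sensor_frame)

    pose = localize(sensor_frame={"lidar": [], "camera": []})
    print(pose)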
Society increasingly relies on machine learning models for automated decision making. Yet, efficiency gains from automation have come paired with concern for algorithmic discrimination that can systematize inequality. Recent work has proposed optimal post-processing methods that randomize classification decisions for a fraction of individuals, in order to achieve fairness measures related to parity in errors and calibration. These methods, however, have raised concern due to the information inefficiency, intra-group unfairness, and Pareto sub-optimality they entail. The present work proposes an alternative active framework for fair classification, where, in deployment, a decision-maker adaptively acquires information according to the needs of different groups or individuals, towards balancing disparities in classification performance. We propose two such methods, where information collection is adapted to group- and individual-level needs respectively. We show on real-world datasets that these can achieve: 1) calibration and single error parity (e.g., equal opportunity); and 2) parity in both false positive and false negative rates (i.e., equal odds). Moreover, we show that by leveraging their additional degree of freedom, active approaches can substantially outperform randomization-based classifiers previously considered optimal, while avoiding limitations such as intra-group unfairness.
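As a rough sketch of the adaptive acquisition idea (the scoring function and thresholds below are hypothetical stand-ins, not the authors' method), a classifier might keep requesting features for an individual only while its score remains in an uncertain band:

    # Sketch of adaptive information acquisition: gather extra features
    # only while the prediction is too uncertain to classify confidently.
    def predict_proba(features):
        # Placeholder scoring function; a real system would use a
        # trained, calibrated classifier.
        return min(1.0, 0.15 * len(features))

    def active_classify(cheap_features, acquire_more, confidence=0.8):
        features = list(cheap_features)
        proba = predict_proba(features)
        # Keep acquiring features while the score sits in the uncertain band.
        while (1 - confidence) < proba < confidence and acquire_more:
            features.append(acquire_more.pop(0))
            proba = predict_proba(features)
        return proba >= 0.5, proba

    decision, proba = active_classify(["f1", "f2"], ["f3", "f4", "f5"])
    print(decision, proba)

Because acquisition effort can differ across individuals, this extra degree of freedom is what lets active approaches balance error rates across groups without randomizing decisions.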