
Identifying Barriers to Adoption for Rust through Online Discourse

Posted by: Will Crichton
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Rust is a low-level programming language known for its unique approach to memory-safe systems programming and for its steep learning curve. To understand what makes Rust difficult to adopt, we surveyed the top Reddit and Hacker News posts and comments about Rust; from these online discussions, we identified three hypotheses about Rust's barriers to adoption. We found that certain key features, idioms, and integration patterns were not easily accessible to new users.
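The abstract names the hypotheses without showing what such a barrier looks like in practice. As a minimal, hypothetical sketch (ours, not the paper's) of one widely reported stumbling block, the borrow checker's aliasing rules, consider:

    fn main() {
        let mut scores = vec![10, 20, 30];

        // A common newcomer attempt: hold a reference into the vector
        // while also mutating it.
        // let first = &scores[0];   // immutable borrow begins here
        // scores.push(40);          // ERROR: cannot borrow `scores` as
        //                           // mutable while it is also borrowed
        // println!("{first}");

        // Idiomatic fix: copy the value out (i32 is Copy), so no borrow
        // is held across the mutation.
        let first = scores[0];
        scores.push(40);
        println!("first = {first}, scores = {scores:?}");
    }

The fix depends on tacit knowledge (here, which types are Copy), the kind of not-easily-accessible idiom the paper's hypotheses point at.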




Read also

As more and more face-to-face classes move to online environments, it becomes increasingly important to explore any emerging barriers to students' learning. This work focuses on characterizing student barriers to active learning in synchronous online environments. The aim is to help novice educators develop a better understanding of those barriers and prepare more student-centered course plans for their active online classes. Towards this end, we adopt a qualitative research approach and study information from different sources: social media content, interviews, and surveys from students and expert educators. Through a thematic analysis, we craft a nuanced list of students' online active learning barriers within the themes of human-side, technological, and environmental barriers. Each barrier is explored from the three aspects of frequency, importance, and exclusiveness to active online classes. Finally, we conduct a summative study with 12 novice educators and explain the benefits of using our barrier list for course planning in active online classes.
Online discourse takes place in corporate-controlled spaces thought by users to be public realms. These platforms in name enable free speech but in practice implement varying degrees of censorship either by government edict or by uneven and unseen corporate policy. This kind of censorship has no countervailing accountability mechanism, and as such platform owners, moderators, and algorithms shape public discourse without recourse or transparency. Systems research has explored approaches to decentralizing or democratizing Internet infrastructure for decades. In parallel, the Internet censorship literature is replete with efforts to measure and overcome online censorship. However, in the course of designing specialized open-source platforms and tools, projects generally neglect the needs of supportive but uninvolved "average users". In this paper, we propose a pluralistic approach to democratizing online discourse that considers both the systems-related and user-facing issues as first-order design goals.
The enormous amount of discourse taking place online poses challenges to the functioning of a civil and informed public sphere. Efforts to standardize online discourse data, such as ClaimReview, are making available a wealth of new data about potentially inaccurate claims, reviewed by third-party fact-checkers. These data could help shed light on the nature of online discourse, the role of political elites in amplifying it, and its implications for the integrity of the online information ecosystem. Unfortunately, the semi-structured nature of much of this data presents significant challenges when it comes to modeling and reasoning about online discourse. A key challenge is relation extraction, which is the task of determining the semantic relationships between named entities in a claim. Here we develop a novel supervised learning method for relation extraction that combines graph embedding techniques with path traversal on semantic dependency graphs. Our approach is based on the intuitive observation that knowledge of the entities along the path between the subject and object of a triple (e.g. Washington,_D.C. and United_States_of_America) provides useful information that can be leveraged for extracting its semantic relation (i.e. capitalOf). As an example of a potential application of this technique for modeling online discourse, we show that our method can be integrated into a pipeline to reason about potential misinformation claims.
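To make the path intuition concrete, here is a toy Rust sketch (our construction; the paper's actual method combines graph embeddings with path traversal over semantic dependency graphs). It recovers the chain of entities linking a subject to an object, which a downstream classifier would featurize to predict a relation such as capitalOf:

    use std::collections::{HashMap, VecDeque};

    // Breadth-first search over an entity graph; returns the entity
    // chain from `src` to `dst`, the raw material for relation features.
    fn entity_path<'a>(
        edges: &HashMap<&'a str, Vec<&'a str>>,
        src: &'a str,
        dst: &'a str,
    ) -> Option<Vec<&'a str>> {
        let mut prev: HashMap<&'a str, &'a str> = HashMap::new();
        let mut queue = VecDeque::from([src]);
        while let Some(node) = queue.pop_front() {
            if node == dst {
                // Walk predecessor links back to the source.
                let mut path = vec![dst];
                let mut cur = dst;
                while let Some(&p) = prev.get(cur) {
                    path.push(p);
                    cur = p;
                }
                path.reverse();
                return Some(path);
            }
            for &next in edges.get(node).into_iter().flatten() {
                if next != src && !prev.contains_key(next) {
                    prev.insert(next, node);
                    queue.push_back(next);
                }
            }
        }
        None
    }

    fn main() {
        // A tiny entity graph, using the abstract's running example.
        let edges = HashMap::from([
            ("Washington,_D.C.", vec!["Potomac_River", "United_States_of_America"]),
            ("Potomac_River", vec!["Washington,_D.C."]),
            ("United_States_of_America", vec!["Washington,_D.C."]),
        ]);
        let path = entity_path(&edges, "Washington,_D.C.", "United_States_of_America");
        // A classifier fed this path would predict capitalOf(...).
        println!("{path:?}");
    }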
Writing high-performance image processing code is challenging and labor-intensive. The Halide programming language simplifies this task by decoupling high-level algorithms from schedules which optimize their implementation. However, even with this abstraction, it is still challenging for Halide programmers to understand complicated scheduling strategies and productively write valid, optimized schedules. To address this, we propose a programming support method called guided optimization. Guided optimization provides programmers a set of valid optimization options and interactive feedback about their current choices, which enables them to comprehend and efficiently optimize image processing code without the time-consuming trial-and-error process of traditional text editors. We implemented a proof-of-concept system, Roly-poly, which integrates guided optimization, program visualization, and schedule cost estimation to support the comprehension and development of efficient Halide image processing code. We conducted a user study with novice Halide programmers and confirmed that Roly-poly and its guided optimization were informative, increased productivity, and resulted in higher-performing schedules in less time.
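Halide itself is a C++-embedded DSL, so the sketch below is only a hedged Rust analogue (our construction, not Halide code) of the split it enforces: the algorithm, what each output pixel equals, stays fixed, while the schedule, the loop structure that computes it, is free to change:

    const W: usize = 256;
    const H: usize = 256;

    // Algorithm: a 3-tap horizontal blur, defined point-wise.
    fn blur_at(img: &[f32], x: usize, y: usize) -> f32 {
        let row = &img[y * W..(y + 1) * W];
        (row[x.saturating_sub(1)] + row[x] + row[(x + 1).min(W - 1)]) / 3.0
    }

    // Schedule 1: plain row-major traversal.
    fn schedule_row_major(img: &[f32], out: &mut [f32]) {
        for y in 0..H {
            for x in 0..W {
                out[y * W + x] = blur_at(img, x, y);
            }
        }
    }

    // Schedule 2: the same algorithm computed tile by tile; in a
    // multi-stage pipeline this ordering helps keep intermediates in cache.
    fn schedule_tiled(img: &[f32], out: &mut [f32], tile: usize) {
        for ty in (0..H).step_by(tile) {
            for tx in (0..W).step_by(tile) {
                for y in ty..(ty + tile).min(H) {
                    for x in tx..(tx + tile).min(W) {
                        out[y * W + x] = blur_at(img, x, y);
                    }
                }
            }
        }
    }

    fn main() {
        let img: Vec<f32> = (0..W * H).map(|i| i as f32).collect();
        let (mut a, mut b) = (vec![0.0; W * H], vec![0.0; W * H]);
        schedule_row_major(&img, &mut a);
        schedule_tiled(&img, &mut b, 64);
        assert_eq!(a, b); // schedules change *how*, never *what*
        println!("both schedules computed the identical blur");
    }

Guided optimization, as the abstract describes it, amounts to steering the programmer through the space of such schedules with validity constraints and cost feedback.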
Java is the go-to programming language for developing scalable enterprise cloud applications. In such systems, even a few percent CPU time savings can offer a significant competitive advantage and cost saving. Although performance tools abound in Java, those that focus on data locality in the memory hierarchy are rare. In this paper, we present DJXPerf, a lightweight, object-centric memory profiler for Java, which associates memory-hierarchy performance metrics (e.g., cache/TLB misses) with Java objects. DJXPerf uses statistical sampling of hardware performance monitoring counters to attribute metrics not only to source code locations but also to Java objects. DJXPerf combines Java object allocation contexts with their usage contexts and presents them ordered by the poorest locality behaviors. DJXPerf's performance measurement, object attribution, and presentation techniques guide optimizing object allocation, layout, and access patterns. DJXPerf incurs only ~8% runtime overhead and ~5% memory overhead on average, requiring no modifications to hardware, OS, Java virtual machine, or application source code, which makes it attractive to use in production. Guided by DJXPerf, we study and optimize a number of Java and Scala programs, including well-known benchmarks and real-world applications, and demonstrate significant speedups.
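DJXPerf itself targets Java and reads hardware performance counters; as a hedged, language-neutral illustration (ours, not the paper's) of the kind of layout fix such object-centric locality metrics point at, here is the classic array-of-structs versus struct-of-arrays change, sketched in Rust:

    // Array-of-structs: each hot `pos` sits 64 bytes from the next, so a
    // scan drags the cold fields through the cache line by line.
    struct ParticleAoS {
        pos: f64,
        _vel: f64,
        _payload: [u8; 48],
    }

    // Struct-of-arrays: the hot field is stored densely, about eight
    // `pos` values per 64-byte cache line.
    struct ParticlesSoA {
        pos: Vec<f64>,
        _vel: Vec<f64>,
        _payload: Vec<[u8; 48]>,
    }

    fn sum_aos(ps: &[ParticleAoS]) -> f64 {
        ps.iter().map(|p| p.pos).sum()
    }

    fn sum_soa(ps: &ParticlesSoA) -> f64 {
        ps.pos.iter().sum()
    }

    fn main() {
        let n = 1_000_000;
        let aos: Vec<ParticleAoS> = (0..n)
            .map(|i| ParticleAoS { pos: i as f64, _vel: 0.0, _payload: [0; 48] })
            .collect();
        let soa = ParticlesSoA {
            pos: (0..n).map(|i| i as f64).collect(),
            _vel: vec![0.0; n],
            _payload: vec![[0; 48]; n],
        };
        // Same result; the SoA scan touches roughly 1/8 the cache lines.
        assert_eq!(sum_aos(&aos), sum_soa(&soa));
        println!("sums agree");
    }

A profiler like DJXPerf earns its keep by identifying which objects deserve this treatment, rather than leaving the programmer to guess.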