Do algorithms for drawing graphs pass the Turing Test? That is, are their outputs indistinguishable from graphs drawn by humans? We address this question through a human-centred experiment, focusing on small graphs, of a size for which it would be reasonable for someone to choose to draw the graph manually. Overall, we find that hand-drawn layouts can be distinguished from those generated by graph drawing algorithms, although this is not always the case for graphs drawn by force-directed or multi-dimensional scaling algorithms, making these good candidates for Turing Test success. We show that, in general, hand-drawn graphs are judged to be of higher quality than automatically generated ones, although this result varies with graph size and algorithm.
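To make the two algorithm families singled out above concrete, here is a minimal sketch using networkx. The choice of the Petersen graph, the seed, and the use of Kamada-Kawai as a stress/MDS-style stand-in are our own illustrative assumptions, not the study's experimental setup:

```python
import networkx as nx

# A small graph of the kind one might reasonably draw by hand.
G = nx.petersen_graph()

# Force-directed layout (Fruchterman-Reingold spring model).
pos_force = nx.spring_layout(G, seed=42)

# Kamada-Kawai minimizes stress on shortest-path distances,
# in the same spirit as multi-dimensional scaling layouts.
pos_stress = nx.kamada_kawai_layout(G)

print(pos_force[0], pos_stress[0])
```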
Graph models, like other machine learning models, have implicit and explicit biases built-in, which often impact performance in nontrivial ways. The models faithfulness is often measured by comparing the newly generated graph against the source graph using any number or combination of graph properties. Differences in the size or topology of the generated graph therefore indicate a loss in the model. Yet, in many systems, errors encoded in loss functions are subtle and not well understood. In the present work, we introduce the Infinity Mirror test for analyzing the robustness of graph models. This straightforward stress test works by repeatedly fitting a model to its own outputs. A hypothetically perfect graph model would have no deviation from the source graph; however, the models implicit biases and assumptions are exaggerated by the Infinity Mirror test, exposing potential issues that were previously obscured. Through an analysis of thousands of experiments on synthetic and real-world graphs, we show that several conventional graph models degenerate in exciting and informative ways. We believe that the observed degenerative patterns are clues to the future development of better graph models.
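The fit-then-generate loop at the heart of the test is simple to state in code. Below is a minimal sketch of the protocol; the `fit`/`generate` interface and the toy Erdős–Rényi density model are our own illustrative assumptions (the paper evaluates real graph models, not this one):

```python
import networkx as nx

class ErdosRenyiModel:
    """Toy model: fit an edge density, regenerate a G(n, p) graph."""
    def fit(self, graph):
        n, m = graph.number_of_nodes(), graph.number_of_edges()
        self.n = n
        self.p = 2 * m / (n * (n - 1)) if n > 1 else 0.0

    def generate(self):
        return nx.gnp_random_graph(self.n, self.p)

def infinity_mirror(source, model, generations=10):
    """Repeatedly fit the model to its own output; a perfect model
    would show no drift in graph properties across generations."""
    current = source
    history = []
    for _ in range(generations):
        model.fit(current)
        current = model.generate()
        history.append(nx.density(current))  # any property could be tracked
    return history

G = nx.karate_club_graph()
print(infinity_mirror(G, ErdosRenyiModel()))
```

Tracking a property such as density across generations exposes systematic drift: a biased model compounds its own error each time it is refit to its own output.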
The ACM A.M. Turing Award is commonly acknowledged as the highest distinction in computer science. Since the 1960s, it has been awarded to computer scientists who have made outstanding contributions. The significance of this award is far-reaching, both for the laureates and for their research teams. However, unlike the Nobel Prize, which has been extensively investigated, little research has been done to explore this most important award. To this end, we propose the Turing Number (TN) index to measure how far a specific scholar is from this award. Inspired by previous work on the Erdos Number and the Bacon Number, this index is defined as the length of the shortest path in the collaboration network from a given scholar to any Turing Award Laureate. Experimental results suggest that TN can reflect the closeness of collaboration between scholars and Turing Award Laureates. Through correlation analysis between TN and metrics at the bibliometric and network levels, we demonstrate that TN has the potential to reflect a scholar's academic influence and reputation.
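Because every laureate has TN 0, the index for all scholars can be computed with one multi-source breadth-first search over the coauthorship graph. The sketch below, including the adjacency-dict representation and the toy author names, is our own illustration of that computation:

```python
from collections import deque

def turing_numbers(coauthors, laureates):
    """Multi-source BFS: coauthors maps each author to the set of
    authors they have published with. Laureates get TN 0; authors
    unreachable from any laureate have no finite TN."""
    dist = {a: 0 for a in laureates}
    queue = deque(laureates)
    while queue:
        a = queue.popleft()
        for b in coauthors.get(a, ()):
            if b not in dist:
                dist[b] = dist[a] + 1
                queue.append(b)
    return dist

# Hypothetical toy coauthorship graph.
graph = {
    "laureate": {"alice"},
    "alice": {"laureate", "bob"},
    "bob": {"alice"},
}
print(turing_numbers(graph, {"laureate"}))  # {'laureate': 0, 'alice': 1, 'bob': 2}
```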
Reading comprehension is an important ability of human intelligence. Literacy and numeracy are the two most essential foundations for people to succeed at study, at work, and in life. Reading comprehension ability is a core component of literacy. In most education systems, developing reading comprehension ability is compulsory in the curriculum from year one to year 12. It is an indispensable ability in the dissemination of knowledge. With the emergence of artificial intelligence, computers have started to be able to read and understand like people in some contexts. They can even read better than human beings on some tasks, yet have little clue on others. It would be very beneficial if we could identify the levels of machine comprehension ability, which would direct further improvement. The Turing test is a well-known test of the difference between computer intelligence and human intelligence. In order to compare the difference between people reading and machines reading, we propose a test called the (reading) Comprehension Ability Test (CAT). CAT is similar to the Turing test: passing it means we cannot differentiate people from algorithms in terms of their comprehension ability. CAT has multiple levels reflecting different abilities in reading comprehension, from identifying basic facts, to performing inference, to understanding intent and sentiment.
Locally-biased graph algorithms are algorithms that attempt to find local or small-scale structure in a large data graph. In some cases, this can be accomplished by adding some sort of locality constraint and calling a traditional graph algorithm; but more interesting are locally-biased graph algorithms that compute answers by running a procedure that does not even look at most of the input graph. This corresponds more closely to what practitioners from various data science domains do, but it does not correspond well with the way that algorithmic and statistical theory is typically formulated. Recent work from several research communities has focused on developing locally-biased graph algorithms that come with strong complementary algorithmic and statistical theory and that are useful in practice in downstream data science applications. We provide a review and overview of this work, highlighting commonalities between seemingly different approaches, and highlighting promising directions for future work.
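As one concrete instance of an algorithm that never looks at most of the input graph, here is a sketch of the Andersen–Chung–Lang push procedure for approximate personalized PageRank, a canonical locally-biased method from this literature. Parameter names and values are our own choices; this is an illustrative sketch, not any particular paper's implementation:

```python
import networkx as nx

def approximate_ppr(G, seed, alpha=0.15, eps=1e-4):
    """ACL push: approximate personalized PageRank from a seed node.
    Work is bounded by O(1 / (eps * alpha)), independent of graph size,
    so only nodes near the seed are ever touched."""
    p, r = {}, {seed: 1.0}  # estimate and residual mass
    queue = [seed]
    while queue:
        u = queue.pop()
        du = G.degree(u)
        ru = r.get(u, 0.0)
        if ru < eps * du:
            continue  # residual already below the push threshold
        # Push: move alpha of the residual into the estimate,
        # keep half of the rest, spread half to the neighbors.
        p[u] = p.get(u, 0.0) + alpha * ru
        r[u] = (1 - alpha) * ru / 2.0
        if r[u] >= eps * du:
            queue.append(u)
        share = (1 - alpha) * ru / (2.0 * du)
        for v in G.neighbors(u):
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * G.degree(v):
                queue.append(v)
    return p  # nonzero only on nodes near the seed

G = nx.karate_club_graph()
print(sorted(approximate_ppr(G, 0).items())[:5])
```

The returned vector is sparse by construction, which is exactly the "does not even look at most of the input graph" property the abstract describes.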
Timeslices are often used to draw and visualize dynamic graphs. While timeslices are a natural way to think about dynamic graphs, they are routinely imposed on continuous data. Often, it is unclear how many timeslices to select: too few timeslices can miss temporal features such as causality or even graph structure, while too many slow the drawing computation. We present a model for dynamic graphs which is not based on timeslices, and a dynamic graph drawing algorithm, DynNoSlice, to draw graphs in this model. In our evaluation, we demonstrate the advantages of this approach over timeslicing on continuous data sets.
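To illustrate the difference between the two representations, here is a minimal sketch of a timeslice-free dynamic graph in which edges carry continuous presence intervals and can be queried at any real-valued time. This data structure is our own illustration of the idea, not the DynNoSlice implementation:

```python
from dataclasses import dataclass, field

@dataclass
class IntervalEdge:
    u: str
    v: str
    start: float  # time the edge appears
    end: float    # time the edge disappears

@dataclass
class ContinuousDynamicGraph:
    """Dynamic graph without timeslices: edges keep their exact
    presence intervals instead of being binned into snapshots."""
    edges: list = field(default_factory=list)

    def add_edge(self, u, v, start, end):
        self.edges.append(IntervalEdge(u, v, start, end))

    def at(self, t):
        """Edges alive at continuous time t; no slicing is imposed."""
        return [(e.u, e.v) for e in self.edges if e.start <= t < e.end]

g = ContinuousDynamicGraph()
g.add_edge("a", "b", 0.0, 2.5)
g.add_edge("b", "c", 1.2, 3.0)
print(g.at(1.5))  # [('a', 'b'), ('b', 'c')]
```

Because intervals are kept exactly, short-lived edges that would fall between two coarse timeslices (and the causal orderings they carry) are never lost, at the cost of a more involved drawing computation.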