
The Trade-offs with Space Time Cube Representation of Spatiotemporal Patterns

Posted by Per Ola Kristensson
Publication date: 2007
Research field: Informatics Engineering
Language: English





Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Fast and correct analysis of such information is important in, for instance, geospatial and social visualization applications. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a dataset to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either a space time cube or a baseline 2D representation. For some simple questions the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the dataset, the space time cube representation resulted in response times that were on average twice as fast, with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users when analyzing complex spatiotemporal patterns.
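To make the representation concrete, here is a minimal sketch of the space-time cube idea in Python (assuming matplotlib and numpy are available; the trajectory data is synthetic): an object's (x, y) positions lie on the horizontal plane and time is mapped onto the vertical axis, so a moving object traces a 3D polyline through the cube.

```python
# A minimal sketch of the space-time cube idea: plot (x, y) positions on the
# horizontal plane and map time onto the vertical axis, so a moving object's
# trajectory becomes a 3D polyline. The data here is synthetic.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 100)            # time stamps
x = np.cos(t)                          # longitude-like coordinate
y = np.sin(t)                          # latitude-like coordinate

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(x, y, t)                       # the trajectory rises through the cube
ax.set_xlabel("x (space)")
ax.set_ylabel("y (space)")
ax.set_zlabel("t (time)")
plt.show()
```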




Read also

Increased access to mobile devices motivates the need to design communicative visualizations that are responsive to varying screen sizes. However, relatively little design guidance or tooling is currently available to authors. We contribute a detailed characterization of responsive visualization strategies in communication-oriented visualizations, identifying 76 total strategies by analyzing 378 pairs of large screen (LS) and small screen (SS) visualizations from online articles and reports. Our analysis distinguishes between the Targets of responsive visualization, referring to what elements of a design are changed, and the Actions, representing how targets are changed. We identify key trade-offs related to authors' need to maintain graphical density, referring to the amount of information per pixel, while also maintaining the message or intended takeaways for users of a visualization. We discuss implications of our findings for future visualization tool design to support responsive transformation of visualization designs, including requirements for automated recommenders for communication-oriented responsive visualizations.
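As a toy illustration of the Target/Action framing, a responsive transformation can be read as a set of rules, each naming a target (what part of the design changes) and an action (how it changes). The spec format, rule set, and breakpoint below are hypothetical, not the authors' tooling.

```python
# Hypothetical illustration: each rule pairs a target with an action.
def make_responsive(spec: dict, screen_width: int) -> dict:
    small = screen_width < 600          # assumed small-screen breakpoint
    out = dict(spec)
    if small:
        out["annotations"] = []                     # target: annotations, action: remove
        out["width"], out["height"] = 320, 480      # target: size, action: rescale
        if spec.get("orientation") == "horizontal": # target: axes, action: transpose
            out["orientation"] = "vertical"
    return out

ls_spec = {"orientation": "horizontal", "width": 960,
           "height": 540, "annotations": ["peak in 2020"]}
print(make_responsive(ls_spec, 375))    # the derived small-screen variant
```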
We revisit the longest common extension (LCE) problem, that is, preprocess a string $T$ into a compact data structure that supports fast LCE queries. An LCE query takes a pair $(i,j)$ of indices in $T$ and returns the length of the longest common prefix of the suffixes of $T$ starting at positions $i$ and $j$. We study the time-space trade-offs for the problem, that is, the space used for the data structure vs. the worst-case time for answering an LCE query. Let $n$ be the length of $T$. Given a parameter $\tau$, $1 \leq \tau \leq n$, we show how to achieve either $O(\frac{n}{\sqrt{\tau}})$ space and $O(\tau)$ query time, or $O(\frac{n}{\tau})$ space and $O(\tau \log(|LCE(i,j)|/\tau))$ query time, where $|LCE(i,j)|$ denotes the length of the LCE returned by the query. These bounds provide the first smooth trade-offs for the LCE problem and almost match the previously known bounds at the extremes when $\tau=1$ or $\tau=n$. We apply the result to obtain improved bounds for several applications where the LCE problem is the computational bottleneck, including approximate string matching and computing palindromes. We also present an efficient technique to reduce LCE queries on two strings to one string. Finally, we give a lower bound on the time-space product for LCE data structures in the non-uniform cell probe model showing that our second trade-off is nearly optimal.
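For intuition, here is a minimal Python sketch of the space-free extreme of the trade-off (roughly the $\tau = n$ regime): an LCE query answered by direct character comparison, using $O(1)$ extra space and $O(|LCE(i,j)|)$ time. The data structures above trade space for faster queries.

```python
# A minimal sketch of the space-free extreme: answer an LCE query by direct
# character comparison, with O(1) extra space and O(|LCE(i, j)|) time.
def lce(T: str, i: int, j: int) -> int:
    """Length of the longest common prefix of T[i:] and T[j:]."""
    k = 0
    while i + k < len(T) and j + k < len(T) and T[i + k] == T[j + k]:
        k += 1
    return k

assert lce("banana", 1, 3) == 3   # "anana" vs "ana" share the prefix "ana"
```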
We present time-space trade-offs for computing the Euclidean minimum spanning tree of a set $S$ of $n$ point-sites in the plane. More precisely, we assume that $S$ resides in a random-access memory that can only be read. The edges of the Euclidean minimum spanning tree $\text{EMST}(S)$ have to be reported sequentially, and they cannot be accessed or modified afterwards. There is a parameter $s \in \{1, \dots, n\}$ so that the algorithm may use $O(s)$ cells of read-write memory (called the workspace) for its computations. Our goal is to find an algorithm that has the best possible running time for any given $s$ between $1$ and $n$. We show how to compute $\text{EMST}(S)$ in $O\big((n^3/s^2)\log s\big)$ time with $O(s)$ cells of workspace, giving a smooth trade-off between the two best known bounds $O(n^3)$ for $s = 1$ and $O(n \log n)$ for $s = n$. For this, we run Kruskal's algorithm on the relative neighborhood graph (RNG) of $S$. It is a classic fact that the minimum spanning tree of $\text{RNG}(S)$ is exactly $\text{EMST}(S)$. To implement Kruskal's algorithm with $O(s)$ cells of workspace, we define $s$-nets, a compact representation of planar graphs. This allows us to efficiently maintain and update the components of the current minimum spanning forest as the edges are being inserted.
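For contrast, here is a minimal Python sketch of the classical unrestricted-workspace approach (the $s = n$ regime): Kruskal's algorithm run on the complete Euclidean graph. The paper's algorithm instead streams edges of the RNG with only $O(s)$ read-write cells via $s$-nets, which this sketch does not attempt.

```python
# A minimal sketch of the classical approach: Kruskal's algorithm on the
# complete Euclidean graph, with unrestricted workspace.
from math import dist

def emst(points):
    """Return the edges of the Euclidean minimum spanning tree."""
    n = len(points)
    parent = list(range(n))

    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    tree = []
    for w, i, j in edges:              # take the cheapest non-cycle edge
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

print(emst([(0, 0), (1, 0), (2, 0), (0, 2)]))   # [(0, 1), (1, 2), (0, 3)]
```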
We show that there are CNF formulas which can be refuted in resolution in both small space and small width, but for which any small-width proof must have space exceeding by far the linear worst-case upper bound. This significantly strengthens the space-width trade-offs in [Ben-Sasson 09], and provides one more example of trade-offs in the supercritical regime above worst case recently identified by [Razborov 16]. We obtain our results by using Razborov's new hardness condensation technique and combining it with the space lower bounds in [Ben-Sasson and Nordstrom 08].
Bowen Yu, Ye Yuan, Loren Terveen (2019)
Artificial intelligence algorithms have been used to enhance a wide variety of products and services, including assisting human decision making in high-stakes contexts. However, these algorithms are complex and have trade-offs, notably between prediction accuracy and fairness to population subgroups. This makes it hard for designers to understand algorithms and design products or services in a way that respects users' goals, values, and needs. We proposed a method to help designers and users explore algorithms, visualize their trade-offs, and select algorithms with trade-offs consistent with their goals and needs. We evaluated our method on the problem of predicting criminal defendants' likelihood to re-offend through (i) a large-scale Amazon Mechanical Turk experiment, and (ii) in-depth interviews with domain experts. Our evaluations show that our method can help designers and users of these systems better understand and navigate algorithmic trade-offs. This paper contributes a new way of providing designers the ability to understand and control the outcomes of algorithmic systems they are creating.
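To illustrate the kind of trade-off involved (with synthetic data; demographic parity is one fairness notion among several and not necessarily the paper's metric), one can sweep a decision threshold and report accuracy alongside the gap in positive-prediction rates between two groups:

```python
# A minimal sketch of surfacing an accuracy/fairness trade-off: sweep a
# decision threshold and report accuracy alongside the demographic parity
# gap (difference in positive-prediction rates between two groups).
# The data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)                 # model risk scores
labels = (scores + rng.normal(0, 0.3, 1000)) > 0.5
group = rng.integers(0, 2, 1000)                # binary group membership

for threshold in (0.3, 0.5, 0.7):
    pred = scores > threshold
    accuracy = (pred == labels).mean()
    parity_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    print(f"t={threshold}: accuracy={accuracy:.2f}, parity gap={parity_gap:.2f}")
```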