
Bijections for Ranked Tree-Child Networks

Added by Michael Fuchs
Publication date: 2021
Language: English





The class of ranked tree-child networks, tree-child networks arising from an evolution process with a fixed embedding into the plane, has recently been introduced by Bienvenu, Lambert, and Steel. These authors derived counting results for this class. In this note, we will give bijective proofs of three of their results. Two of our bijections answer questions raised in their paper.




Related research

Tree-child networks are a recently-described class of directed acyclic graphs that have risen to prominence in phylogenetics (the study of evolutionary trees and networks). Although these networks have a number of attractive mathematical properties, many combinatorial questions concerning them remain intractable. In this paper, we show that endowing these networks with a biologically relevant ranking structure yields mathematically tractable objects, which we term ranked tree-child networks (RTCNs). We explain how to derive exact and explicit combinatorial results concerning the enumeration and generation of these networks. We also explore probabilistic questions concerning the properties of RTCNs when they are sampled uniformly at random. These questions include the lengths of random walks between the root and leaves (both from the root to the leaves and from a leaf to the root); the distribution of the number of cherries in the network; and sampling RTCNs conditional on displaying a given tree. We also formulate a conjecture regarding the scaling limit of the process that counts the number of lineages in the ancestry of a leaf. The main idea in this paper, namely using ranking as a way to achieve combinatorial tractability, may also extend to other classes of networks.
Phylogenetic trees canonically arise as embeddings of phylogenetic networks. We recently showed that the problem of deciding whether two phylogenetic networks embed the same sets of phylogenetic trees is computationally hard (in particular, we showed it to be $\Pi^P_2$-complete). In this paper, we establish a polynomial-time algorithm for this decision problem if the initial two networks consist of a normal network and a tree-child network. The running time of the algorithm is quadratic in the size of the leaf sets.
Heesung Shin, Jiang Zeng (2020)
The Euler number $E_n$ (resp. Entringer number $E_{n,k}$) enumerates the alternating (down-up) permutations of $\{1,\dots,n\}$ (resp. those starting with $k$). The Springer number $S_n$ (resp. Arnold number $S_{n,k}$) enumerates the type $B$ alternating permutations (resp. those starting with $k$). In this paper, using bijections we first derive the counterparts in André permutations and Simsun permutations for the Entringer numbers $(E_{n,k})$, and then the counterparts in signed André permutations and type $B$ increasing 1-2 trees for the Arnold numbers $(S_{n,k})$.
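For readers who want to compute the numbers in question, the Entringer triangle satisfies the classical Seidel-Entringer-Arnold (boustrophedon) recurrence. The Python sketch below is an illustration only, not code from the paper; the indexing that links the triangle entries to $E_{n,k}$ and $E_n$ is our own convention, checked on small cases.

```python
# Minimal sketch of the boustrophedon recurrence behind the Entringer numbers.
# T[m][j] satisfies T[0][0] = 1, T[m][0] = 0 and T[m][j] = T[m][j-1] + T[m-1][m-j].

def entringer_triangle(rows):
    T = [[1]]
    for m in range(1, rows + 1):
        row = [0]
        for j in range(1, m + 1):
            row.append(row[j - 1] + T[m - 1][m - j])
        T.append(row)
    return T

N = 8
T = entringer_triangle(N)

# Euler numbers E_n (all down-up permutations of {1,...,n}) sit on the diagonal:
print([T[n][n] for n in range(N + 1)])      # [1, 1, 1, 2, 5, 16, 61, 272, 1385]

# Entringer numbers E_{n,k} (down-up permutations of {1,...,n} starting with k)
# appear as T[n-1][k-1] in this indexing; e.g. E_{4,1}, ..., E_{4,4}:
print([T[3][k - 1] for k in range(1, 5)])   # [0, 1, 2, 2], summing to E_4 = 5
```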
David Callan (2017)
We show bijectively that Dyck paths with all peaks at odd height are counted by the Motzkin numbers and Dyck paths with all peaks at even height are counted by the Riordan numbers.
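Both statements are easy to sanity-check by brute force. The Python sketch below (an illustration, not the bijection from the paper) enumerates Dyck paths of small semilength and counts those whose peaks all lie at odd, respectively even, height; on small cases the counts line up with the Motzkin numbers 1, 1, 2, 4, 9, 21, 51, ... and the Riordan numbers 0, 1, 1, 3, 6, 15, 36, ....

```python
# Brute-force check (not a bijection): enumerate Dyck paths as +1/-1 step
# sequences and classify them by the parity of their peak heights.
from itertools import product

def dyck_paths(n):
    """Yield all Dyck paths of semilength n as tuples of +1 (up) / -1 (down) steps."""
    for steps in product((1, -1), repeat=2 * n):
        height, ok = 0, True
        for s in steps:
            height += s
            if height < 0:
                ok = False
                break
        if ok and height == 0:
            yield steps

def peak_heights(steps):
    """Heights at which an up-step is immediately followed by a down-step."""
    heights, height = [], 0
    for i, s in enumerate(steps):
        height += s
        if s == 1 and i + 1 < len(steps) and steps[i + 1] == -1:
            heights.append(height)
    return heights

# Columns: semilength n, #paths with all peaks at odd height, same for even height.
# Expected: odd column 1, 1, 2, 4, 9, 21, 51 (Motzkin); even column 0, 1, 1, 3, 6, 15, 36 (Riordan).
for n in range(1, 8):
    odd = sum(all(h % 2 == 1 for h in peak_heights(p)) for p in dyck_paths(n))
    even = sum(all(h % 2 == 0 for h in peak_heights(p)) for p in dyck_paths(n))
    print(n, odd, even)
```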
Generating video descriptions automatically is a challenging task that involves a complex interplay between spatio-temporal visual features and language models. Given that videos consist of spatial (frame-level) features and their temporal evolutions, an effective captioning model should be able to attend to these different cues selectively. To this end, we propose a Spatio-Temporal and Temporo-Spatial (STaTS) attention model which, conditioned on the language state, hierarchically combines spatial and temporal attention to videos in two different orders: (i) a spatio-temporal (ST) sub-model, which first attends to regions that have temporal evolution, then temporally pools the features from these regions; and (ii) a temporo-spatial (TS) sub-model, which first decides a single frame to attend to, then applies spatial attention within that frame. We propose a novel LSTM-based temporal ranking function, which we call ranked attention, for the ST model to capture action dynamics. Our entire framework is trained end-to-end. We provide experiments on two benchmark datasets: MSVD and MSR-VTT. Our results demonstrate the synergy between the ST and TS modules, outperforming recent state-of-the-art methods.
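To make the two attention orders concrete, here is a minimal NumPy sketch (our own simplified illustration, not the authors' model): both functions score features against a language state with a single bilinear form; the ST variant attends over regions within each frame before pooling over time, and the TS variant uses a soft temporal selection of frames before attending spatially.

```python
# Simplified sketch of spatio-temporal (ST) vs temporo-spatial (TS) attention.
# Shapes, the bilinear scoring, and the soft frame selection are illustrative
# assumptions, not details taken from the paper.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def st_attention(feats, h, W):
    """Spatial attention within each frame, then temporal pooling of frame vectors."""
    scores = feats @ W @ h                               # (T, R) region scores
    alpha = softmax(scores, axis=1)                      # spatial weights per frame
    frame_vecs = (alpha[..., None] * feats).sum(axis=1)  # (T, D) attended frames
    beta = softmax(frame_vecs @ W @ h)                   # temporal weights
    return (beta[:, None] * frame_vecs).sum(axis=0)      # (D,) context vector

def ts_attention(feats, h, W):
    """Soft frame selection first, then spatial attention inside the selected mix."""
    beta = softmax(feats.mean(axis=1) @ W @ h)           # (T,) temporal weights
    mixed = (beta[:, None, None] * feats).sum(axis=0)    # (R, D) selected regions
    alpha = softmax(mixed @ W @ h)                       # spatial weights
    return (alpha[:, None] * mixed).sum(axis=0)          # (D,) context vector

rng = np.random.default_rng(0)
T, R, D = 8, 6, 16                                       # frames, regions, feature dim
feats = rng.normal(size=(T, R, D))                       # video features
h, W = rng.normal(size=D), rng.normal(size=(D, D))       # language state, bilinear map
print(st_attention(feats, h, W).shape, ts_attention(feats, h, W).shape)
```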