
Computing Weakly Reversible Deficiency Zero Network Translations Using Elementary Flux Modes

 Added by Matthew Johnston
 Publication date 2018
Research language: English





We present a computational method for performing structural translation, which has been studied recently in the context of analyzing the steady states and dynamical behavior of mass-action systems derived from biochemical reaction networks. Our procedure involves solving a binary linear programming problem in which the decision variables correspond to interactions between the reactions of the original network. We call the resulting network a reaction-to-reaction graph and formalize how such a construction relates to the original reaction network and to the structural translation. We demonstrate the efficacy and efficiency of the algorithm by running it on 508 networks from the European Bioinformatics Institute's BioModels database. We also summarize how this work can be incorporated into recently proposed algorithms for establishing mono- and multistationarity in biochemical reaction systems.
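To make the binary programming step concrete, here is a minimal Python sketch (using the PuLP modelling library) that selects reaction-to-reaction edges for a toy three-reaction cycle. The compatibility test, objective, and degree constraints below are illustrative assumptions, not the formulation used in the paper.

import pulp

# Toy network A -> B -> C -> A; each reaction is (source complex, product complex),
# written as tuples of species coefficients over (A, B, C).
reactions = {
    0: ((1, 0, 0), (0, 1, 0)),
    1: ((0, 1, 0), (0, 0, 1)),
    2: ((0, 0, 1), (1, 0, 0)),
}

def compatible(i, j):
    # Placeholder condition: allow an edge i -> j when i's product complex equals
    # j's source complex (a deliberately naive stand-in for the real criterion).
    return reactions[i][1] == reactions[j][0]

prob = pulp.LpProblem("reaction_to_reaction_graph", pulp.LpMaximize)
pairs = [(i, j) for i in reactions for j in reactions if i != j and compatible(i, j)]
y = pulp.LpVariable.dicts("y", pairs, cat="Binary")

# Objective: link as many compatible reaction pairs as possible.
prob += pulp.lpSum(y[p] for p in pairs)

# Each reaction feeds at most one successor and is fed by at most one predecessor.
for k in reactions:
    prob += pulp.lpSum(y[(i, j)] for (i, j) in pairs if i == k) <= 1
    prob += pulp.lpSum(y[(i, j)] for (i, j) in pairs if j == k) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("reaction-to-reaction edges:", [p for p in pairs if pulp.value(y[p]) > 0.5])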

Related Research

Mass-action kinetics is frequently used in systems biology to model the behaviour of interacting chemical species. Many important dynamical properties are known to hold for such systems if they are weakly reversible and have a low deficiency. In particular, the Deficiency Zero and Deficiency One Theorems guarantee strong regularity with regards to the number and stability of positive equilibrium states. It is also known that chemical reaction networks with disparate reaction structure can exhibit the same qualitative dynamics. The theory of linear conjugacy encapsulates the cases where this relationship is captured by a linear transformation. In this paper, we propose a mixed-integer linear programming algorithm capable of determining weakly reversible reaction networks with a minimal deficiency which are linearly conjugate to a given reaction network.
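For reference, the deficiency invoked above is the standard network index $\delta = n - \ell - s$, where $n$ is the number of complexes, $\ell$ is the number of linkage classes, and $s$ is the rank of the stoichiometric subspace; the Deficiency Zero Theorem states that a weakly reversible mass-action network with $\delta = 0$ admits exactly one positive equilibrium in each positive stoichiometric compatibility class, and that equilibrium is locally asymptotically stable.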
Michael P. Frank (2018)
Landauer's Principle that information loss from a computation implies entropy increase can be rigorously proved from mathematical physics. However, carefully examining its detailed formulation reveals that the traditional identification of logically reversible computational operations with bijective transformations of the full digital state space is actually not the correct logical-level characterization of the full set of classical computational operations that can be carried out physically with asymptotically zero energy dissipation. To find the correct logical conditions for physical reversibility, we must account for initial-state probabilities when applying the Principle. The minimal logical-level requirement for the physical reversibility of deterministic computational operations is that the subset of initial states that exhibit nonzero probability in a given statistical operating context must be transformed one-to-one into final states. Thus, any computational operation is conditionally reversible relative to any sufficiently restrictive precondition on its initial state, and the minimum dissipation required for any deterministic operation by Landauer's Principle asymptotically approaches 0 when the probability of meeting any preselected one of its suitable preconditions approaches 1. This realization facilitates simpler designs for asymptotically thermodynamically reversible computational hardware, compared to designs that are restricted to using only fully bijective operations such as Toffoli-type operations. Thus, this more general framework for reversible computing provides a more effective theoretical foundation for the design of practical reversible computers than does the more restrictive traditional model of reversible logic. In this paper, we formally develop the theoretical foundations of the generalized model and briefly survey some of its applications.
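As a numerical illustration of this probability dependence (a simple sketch based on the standard Landauer bound, not the formal development in the paper), erasing a bit whose prior distribution is $(p, 1-p)$ costs at least $k_B T \ln 2 \cdot H(p)$ joules, which vanishes as $p \to 1$:

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # operating temperature, K

def binary_entropy(p):
    # Shannon entropy of a (p, 1 - p) source, in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for p in (0.5, 0.9, 0.99, 0.999999):
    bound = K_B * T * math.log(2) * binary_entropy(p)
    print(f"p = {p:<9} minimum dissipation ~ {bound:.3e} J")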
H.264/Advanced Video Coding (AVC) is currently one of the most commonly used video compression standards. In this paper, we propose a Reversible Data Hiding (RDH) method for H.264/AVC videos. In the proposed method, the macroblocks with intra-frame $4\times 4$ prediction modes in intra frames are first selected as embeddable blocks. Then, the last zero Quantized Discrete Cosine Transform (QDCT) coefficients in all $4\times 4$ blocks of the embeddable macroblocks are paired. Next, a modification mapping rule that makes full use of the modification directions is given. Finally, each zero coefficient-pair is changed by combining the given mapping rule with the to-be-embedded information bits. Since most of the last QDCT coefficients in the $4\times 4$ blocks are zero and located in the high-frequency area, the proposed method achieves high embedding capacity with low distortion.
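The pairing idea can be illustrated with a toy mapping in Python (invented here for exposition; it is not the modification rule proposed in the paper): two payload bits replace each zero coefficient-pair while changing at most one coefficient by $\pm 1$, and the decoder, which is assumed to know exactly which positions carry payload, restores the zeros.

# Invented mapping table: (bit, bit) -> (coefficient, coefficient).
EMBED = {(0, 0): (0, 0), (0, 1): (0, 1), (1, 0): (0, -1), (1, 1): (1, 0)}
EXTRACT = {v: k for k, v in EMBED.items()}

def embed_bits(bit_pairs):
    # Every original coefficient-pair is (0, 0); replace it according to the table.
    return [EMBED[b] for b in bit_pairs]

def extract_bits(coeff_pairs):
    # Recover the payload and restore the zeros, making the hiding reversible.
    return [EXTRACT[c] for c in coeff_pairs], [(0, 0)] * len(coeff_pairs)

payload = [(0, 1), (1, 1), (0, 0), (1, 0)]
bits, restored = extract_bits(embed_bits(payload))
assert bits == payload and all(pair == (0, 0) for pair in restored)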
This paper is concerned with the computation of the high-dimensional zero-norm penalized quantile regression estimator, defined as a global minimizer of the zero-norm penalized check loss function. To seek a desirable approximation to the estimator, we reformulate this NP-hard problem as an equivalent augmented Lipschitz optimization problem and exploit its coupled structure to propose a multi-stage convex relaxation approach (MSCRA_PPA), each step of which inexactly solves a weighted $\ell_1$-regularized check loss minimization problem with a proximal dual semismooth Newton method. Under a restricted strong convexity condition, we provide a theoretical guarantee for MSCRA_PPA by establishing the error bound of each iterate to the true estimator and the rate of linear convergence in a statistical sense. Numerical comparisons on synthetic and real data show that MSCRA_PPA not only has comparable or even better estimation performance, but also requires much less CPU time.
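In notation chosen here for illustration (standard definitions, not taken from the paper), the estimator minimizes $\frac{1}{n}\sum_{i=1}^{n} \rho_\tau(y_i - x_i^{\top}\beta) + \lambda\,\|\beta\|_0$, where $\rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\})$ is the check loss at quantile level $\tau \in (0,1)$ and $\|\beta\|_0$ counts the nonzero coefficients; each MSCRA_PPA stage replaces the zero-norm term with a weighted $\ell_1$ surrogate $\sum_j w_j |\beta_j|$, typically with weights recomputed from the previous iterate.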
Weakly supervised instance segmentation, which can greatly reduce the labor and time cost of pixel-level mask annotation, has attracted increasing attention in recent years. The commonly used pipeline first applies conventional image segmentation methods to automatically generate initial masks and then uses them to train an off-the-shelf segmentation network in an iterative way. However, the initial generated masks usually contain a notable proportion of invalid masks, mainly caused by small object instances. Directly using these initial masks to train the segmentation model hurts performance. To address this problem, we propose a hybrid network. In our architecture, a principal segmentation network handles the normal samples with valid generated masks, while a complementary branch handles the small and dim objects without valid masks. Experimental results indicate that our method achieves significant performance improvements on both small and large object instances, and outperforms all state-of-the-art methods.
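A minimal routing sketch of such a hybrid design in Python (all names are hypothetical; the branches stand in for the actual segmentation networks) would dispatch samples by mask validity:

class Branch:
    # Stand-in for a segmentation network; just counts the samples it receives.
    def __init__(self):
        self.seen = 0
    def update(self, samples):
        self.seen += len(samples)

def is_valid_mask(mask, min_area=20):
    # Toy validity test: a generated mask is "valid" if it covers at least min_area pixels.
    return sum(sum(row) for row in mask) >= min_area

def train_step(batch, principal, complementary):
    valid = [s for s in batch if is_valid_mask(s["mask"])]
    invalid = [s for s in batch if not is_valid_mask(s["mask"])]
    principal.update(valid)          # supervised by the valid generated masks
    complementary.update(invalid)    # handles small, dim objects without valid masks

principal, complementary = Branch(), Branch()
train_step([{"mask": [[1] * 30]}, {"mask": [[1] * 5]}], principal, complementary)
assert (principal.seen, complementary.seen) == (1, 1)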