Observability and controllability are essential concepts in the design of predictive observer models and feedback controllers for networked systems. For example, an uncontrollable mathematical model of a real system has subspaces that influence model behavior but cannot be steered by any input, and such subspaces can be difficult to identify in complex nonlinear networks. Since almost all of the present theory was developed for linear networks without symmetries, here we present a numerical and group-representational framework for quantifying the observability and controllability of nonlinear networks with explicit symmetries, which reveals the connection between symmetries and nonlinear measures of observability and controllability. We numerically observe and theoretically predict that not all symmetries have the same effect on network observation and control. Our analysis shows that the presence of symmetry in a network may decrease observability and controllability, although networks containing only rotational symmetries remain controllable and observable. These results alter our view of the nature of observability and controllability in complex networks, change our understanding of structural controllability, and affect the design of mathematical models to observe and control such networks.
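The classical linear baseline that this abstract generalizes can be illustrated with the Kalman rank condition. The sketch below (an illustration, not the paper's nonlinear framework; the matrices are hypothetical) shows a three-node network in which two undriven nodes are related by a permutation symmetry, which makes the linearized system uncontrollable from the single driven node:

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """Full-rank controllability matrix <=> linear controllability."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Hypothetical 3-node network: nodes 2 and 3 are interchangeable
# (a permutation symmetry), and only node 1 receives the input.
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])
print(is_controllable(A, B))  # False: the symmetric pair cannot be separated
```

Breaking the symmetry (e.g., adding a self-loop to one of the two interchangeable nodes) restores full rank, which mirrors the abstract's observation that symmetry can reduce controllability.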
A new paradigm has recently emerged in brain science whereby communications between glial cells and neuron-glia interactions should be considered together with neurons and their networks to understand higher brain functions. In particular, astrocytes, the main type of glial cells in the cortex, have been shown to communicate with neurons and with each other. They are thought to form a gap-junction-coupled syncytium supporting cell-cell communication via propagating Ca2+ waves. An identified mode of propagation is based on cytoplasm-to-cytoplasm transport of inositol trisphosphate (IP3) through gap junctions that locally triggers Ca2+ pulses via IP3-dependent Ca2+-induced Ca2+ release. It is, however, currently unknown whether this intracellular route is able to support the propagation of long-distance regenerative Ca2+ waves or is restricted to short-distance signaling. Furthermore, the influence of the intracellular signaling dynamics on intercellular propagation remains to be understood. In this work, we propose a model of the gap-junctional route for intercellular Ca2+ wave propagation in astrocytes showing that: (1) long-distance regenerative signaling requires nonlinear coupling in the gap junctions, and (2) even with nonlinear gap junctions, long-distance regenerative signaling is favored when the internal Ca2+ dynamics implements frequency modulation-encoding oscillations with pulsating dynamics, while amplitude modulation-encoding dynamics tends to restrict the propagation range. As a result, spatially heterogeneous molecular properties and/or weak couplings are shown to give rise to rich spatiotemporal dynamics that support complex propagation behaviors. These results shed new light on the mechanisms implicated in the propagation of Ca2+ waves across astrocytes and make precise the conditions under which glial cells may participate in information processing in the brain.
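The contrast between regenerative and passive propagation can be caricatured with a toy discrete model (this is a deliberately simplified sketch, not the paper's biophysical model; `transfer` and `threshold` are hypothetical parameters standing in for gap-junction permeability and the CICR threshold):

```python
def propagation_range(n_cells, transfer=0.6, threshold=0.3, regenerative=True):
    """How far a unit-amplitude Ca2+ pulse travels along a chain of cells.

    Each gap junction passes only a fraction `transfer` of the incoming
    signal. A regenerative cell whose input exceeds `threshold` re-emits a
    full-amplitude pulse (mimicking Ca2+-induced Ca2+ release); a passive
    cell merely relays the attenuated signal.
    """
    amplitude = 1.0
    reached = 0
    for _ in range(n_cells):
        amplitude *= transfer          # attenuation at the gap junction
        if amplitude < threshold:
            break                      # signal too weak to trigger the cell
        reached += 1
        if regenerative:
            amplitude = 1.0            # pulse regenerated at full amplitude
    return reached

print(propagation_range(50, regenerative=True))   # spans the whole chain
print(propagation_range(50, regenerative=False))  # passive decay: short range
```

Passive relay dies out after a couple of cells because the amplitude shrinks geometrically, whereas the regenerative (nonlinear) mechanism sustains the wave indefinitely, in line with the abstract's point (1).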
Finite-state systems have applications in systems biology, formal verification, and synthesis problems of infinite-state (hybrid) systems, among others. As deterministic finite-state systems, logical control networks (LCNs) consist of a finite number of nodes, each of which can be in one of finitely many states and updates its state according to a logical rule. In this paper, we investigate the synthesis problem for controllability and observability of LCNs by state feedback under the semitensor product framework. We show that state feedback can never enforce controllability of an LCN, but can sometimes enforce its observability. We prove that for an LCN $\Sigma$ and the LCN $\Sigma'$ obtained by feeding a state-feedback controller into $\Sigma$: (1) if $\Sigma$ is controllable, then $\Sigma'$ can be either controllable or not; (2) if $\Sigma$ is not controllable, then $\Sigma'$ is not controllable either; (3) if $\Sigma$ is observable, then $\Sigma'$ can be either observable or not; (4) if $\Sigma$ is not observable, then $\Sigma'$ can also be either observable or not. We also prove that if an unobservable LCN can be synthesized to be observable by state feedback, then it can also be synthesized to be observable by closed-loop state feedback (i.e., state feedback without any input). Furthermore, we give an upper bound for the number of closed-loop state-feedback controllers that are needed to verify whether an unobservable LCN can be synthesized to be observable by state feedback.
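Because an LCN is a deterministic finite-state system, its controllability reduces to mutual reachability on the state-transition graph. The sketch below uses a plain transition map rather than the semitensor-product matrix form (an equivalent but simpler representation for illustration; the example network is hypothetical):

```python
from collections import deque

def controllable(delta, n_states, n_inputs):
    """An LCN with transition map delta(state, input) -> state is
    controllable iff every state is reachable from every state."""
    def reachable(s0):
        seen, queue = {s0}, deque([s0])
        while queue:
            s = queue.popleft()
            for u in range(n_inputs):
                t = delta(s, u)
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
        return seen
    return all(len(reachable(s)) == n_states for s in range(n_states))

# A 2-node Boolean control network x1' = u, x2' = x1,
# with state coded as s = (x1 << 1) | x2.
def delta(s, u):
    x1 = (s >> 1) & 1
    return (u << 1) | x1

print(controllable(delta, 4, 2))  # True: any state is reachable in two steps
```

The synthesis result quoted above then says that no choice of state-feedback law can turn a `False` here into a `True`, since feedback only removes transitions from this graph.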
In spite of the recent interest and advances in linear controllability of complex networks, controlling nonlinear network dynamics remains an outstanding problem. We develop an experimentally feasible control framework for nonlinear dynamical networks that exhibit multistability (multiple coexisting final states or attractors), which are representative of, e.g., gene regulatory networks (GRNs). The control objective is to apply parameter perturbation to drive the system from one attractor to another, assuming that the former is undesired and the latter is desired. To make our framework practically useful, we consider RESTRICTED parameter perturbation by imposing the following two constraints: (a) it must be experimentally realizable and (b) it is applied only temporarily. We introduce the concept of ATTRACTOR NETWORK, in which the nodes are the distinct attractors of the system, and there is a directed link from one attractor to another if the system can be driven from the former to the latter using restricted control perturbation. Introduction of the attractor network allows us to formulate a controllability framework for nonlinear dynamical networks: a network is more controllable if the underlying attractor network is more strongly connected, which can be quantified. We demonstrate our control framework using examples from various models of experimental GRNs. A notable finding is that, due to nonlinearity, noise can counter-intuitively facilitate control of the network dynamics.
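The attractor-network idea lends itself to a simple quantification: build the directed graph of attainable attractor-to-attractor transitions and measure how strongly connected it is. A minimal sketch, assuming a hypothetical predicate `can_drive(a, b)` that reports whether a restricted perturbation exists from attractor `a` to attractor `b` (in practice this would come from simulating the system under candidate perturbations):

```python
def attractor_network_controllability(attractors, can_drive):
    """Build the attractor network and score controllability as the
    fraction of ordered attractor pairs (a, b) joined by a directed path."""
    edges = {a: [b for b in attractors if b != a and can_drive(a, b)]
             for a in attractors}

    def reach(a):
        seen, stack = {a}, [a]
        while stack:
            x = stack.pop()
            for y in edges[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen - {a}

    n = len(attractors)
    return sum(len(reach(a)) for a in attractors) / (n * (n - 1))

# Hypothetical three-attractor system: direct perturbations exist only for
# A -> B and B -> C, but A still reaches C through B, so 3 of the 6 ordered
# pairs are connected.
drivable = {("A", "B"), ("B", "C")}
score = attractor_network_controllability(
    ["A", "B", "C"], lambda a, b: (a, b) in drivable)
print(score)  # 0.5
```

A score of 1.0 corresponds to a strongly connected attractor network, i.e., any undesired attractor can be abandoned for any desired one.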
We present cortical surface parcellation using spherical deep convolutional neural networks. Traditional multi-atlas cortical surface parcellation requires inter-subject surface registration using geometric features, which takes 2-3 hours of processing per subject. Moreover, even optimal surface registration does not necessarily produce optimal cortical parcellation, as parcel boundaries are not fully matched to the geometric features. In this context, the choice of training features is important for accurate cortical parcellation. To utilize the networks efficiently, we propose parcellation-specific input data derived from the irregular, complicated structure of cortical surfaces. To this end, we align ground-truth cortical parcel boundaries and use the resulting deformation fields to generate new pairs of deformed geometric features and parcellation maps. To extend the capability of the networks, we then smoothly morph cortical geometric features and parcellation maps using the intermediate deformation fields. We validate our method on 427 adult brains for 49 labels. The experimental results show that our method outperforms traditional multi-atlas and naive spherical U-Net approaches, while achieving full cortical parcellation in less than a minute.
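The augmentation step, morphing features along intermediate deformation fields, can be sketched in one dimension (a drastic simplification of warping on a spherical mesh, for illustration only; the function name and parameters are hypothetical):

```python
import numpy as np

def intermediate_warps(feature, displacement, steps):
    """Generate augmented training samples by warping a feature profile
    along scaled fractions (0 to 1) of a registration displacement field."""
    grid = np.arange(len(feature), dtype=float)
    samples = []
    for alpha in np.linspace(0.0, 1.0, steps):
        warped_pos = grid + alpha * displacement   # partial deformation
        # Resample the warped profile back onto the regular grid.
        samples.append(np.interp(grid, warped_pos, feature))
    return samples

# Three intermediate warps of a toy feature profile under a small shift.
profile = np.array([0.0, 1.0, 0.5, 0.0])
shift = np.array([0.0, 0.2, 0.2, 0.0])
warps = intermediate_warps(profile, shift, 3)
print(len(warps))  # 3
```

Applying the same scaled fields to the parcellation maps keeps each augmented feature/label pair consistent, which is the point of deriving augmentation from the ground-truth boundary alignment.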
In this paper we present a novel approach to automatically infer parameters of spiking neural networks. Neurons are modelled as timed automata waiting for inputs on a number of different channels (synapses) for a given amount of time (the accumulation period). When this period is over, the current potential value is computed considering current and past inputs. If this potential overcomes a given threshold, the automaton emits a broadcast signal over its output channel; otherwise it restarts another accumulation period. After each emission, the automaton remains inactive for a fixed refractory period. Spiking neural networks are formalised as sets of automata, one for each neuron, running in parallel and sharing channels according to the network structure. The model is formally validated against several crucial properties expressed as temporal logic formulae. It is then exploited to find an assignment for the synaptic weights of neural networks such that they can reproduce a given behaviour. The core of this approach consists of identifying correcting actions that adjust synaptic weights and back-propagating them until the expected behaviour is displayed. A concrete case study is discussed.
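The neuron model described above can be sketched as a discrete-event simulation (a simplified illustration, not the timed-automata formalisation itself; it omits any leak on past inputs, and the parameter names are hypothetical):

```python
def simulate_neuron(spike_times, weights, threshold, period, refractory, t_end):
    """One accumulate-and-fire neuron: inputs arriving on channel ch during
    an accumulation period of length `period` are summed with synaptic
    weight weights[ch]; past inputs persist in the potential. When the
    potential reaches `threshold`, the neuron broadcasts a spike and stays
    inactive for `refractory` time units."""
    out, t, potential = [], 0.0, 0.0
    events = sorted((t_i, ch) for ch, ts in enumerate(spike_times)
                    for t_i in ts)
    while t < t_end:
        window_end = t + period
        potential += sum(weights[ch] for (t_i, ch) in events
                         if t <= t_i < window_end)
        if potential >= threshold:
            out.append(window_end)        # broadcast on the output channel
            potential = 0.0
            t = window_end + refractory   # inactive refractory period
        else:
            t = window_end                # restart accumulation period
    return out

# Two input channels with weights 0.6 and 0.5; the second window gathers
# enough potential (past + current inputs) to cross the threshold of 1.0.
spikes = simulate_neuron([[0.5, 1.5], [1.2]], [0.6, 0.5], 1.0, 1.0, 0.5, 5.0)
print(spikes)  # [2.0]
```

In the inference loop sketched by the abstract, a run like this would be compared against the expected output spikes, and the mismatch back-propagated as a correcting action on the weights.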