This paper clarifies requirements on the subsystems of a linear time invariant (LTI) networked dynamic system (NDS) under which subsystem interconnections can be estimated from external output measurements. In this NDS, subsystems may have distinct dynamics, and subsystem interconnections are arbitrary. It is assumed that the system matrices of each subsystem depend on its (pseudo) first principle parameters (FPPs) through a linear fractional transformation (LFT). It is proven that if, in each subsystem, the transfer function matrix (TFM) from its internal inputs to its external outputs is of full normal column rank (FNCR), while the TFM from its external inputs to its internal outputs is of full normal row rank (FNRR), then the structure of the NDS is identifiable. Moreover, in some particular situations, such as when there is no direct information transmission from an internal input to an internal output in any subsystem, a necessary and sufficient condition is established for NDS structure identifiability. A matrix-valued polynomial (MVP) rank-based equivalent condition is further derived, which depends affinely on the subsystem (pseudo) FPPs and can be verified independently for each subsystem. From this condition, several necessary conditions on both the subsystem dynamics and its (pseudo) FPPs are obtained using the Kronecker canonical form (KCF) of a matrix pencil.
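As a hedged illustration of the FNCR/FNRR requirement mentioned above, the sketch below checks the normal ranks of one subsystem's transfer function matrices numerically by evaluating them at a randomly chosen frequency point (the normal rank of a rational matrix equals its rank at almost every frequency). All dimensions, state-space matrices and signal names (internal inputs v, external inputs u, internal outputs z, external outputs y) are placeholders rather than quantities taken from the paper.

```python
# Hedged sketch (placeholder data, not from the paper): checking the FNCR/FNRR
# conditions on one subsystem by evaluating its transfer function matrices at a
# random frequency; the normal rank of a rational matrix equals its rank at
# almost every frequency point.
import numpy as np

rng = np.random.default_rng(0)
n, nv, nu, nz, ny = 4, 2, 2, 2, 3          # assumed state/signal dimensions
A  = rng.standard_normal((n, n))
Bv = rng.standard_normal((n, nv)); Bu = rng.standard_normal((n, nu))
Cz = rng.standard_normal((nz, n)); Cy = rng.standard_normal((ny, n))
Dyv = rng.standard_normal((ny, nv)); Dzu = rng.standard_normal((nz, nu))

def tfm(s, C, B, D):
    """Transfer function matrix C (sI - A)^{-1} B + D at complex frequency s."""
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

s0 = complex(rng.standard_normal(), rng.standard_normal())   # generic test point
G_yv = tfm(s0, Cy, Bv, Dyv)   # internal inputs  -> external outputs
G_zu = tfm(s0, Cz, Bu, Dzu)   # external inputs  -> internal outputs

print("FNCR (internal inputs -> external outputs):", np.linalg.matrix_rank(G_yv) == nv)
print("FNRR (external inputs -> internal outputs):", np.linalg.matrix_rank(G_zu) == nz)
```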
This paper investigates requirements on a networked dynamic system (NDS) under which its subsystem interactions can be determined solely from experimental data or reconstructed from its overall model. The NDS is composed of several subsystems whose dynamics are described in a descriptor form. Apart from regularity of each subsystem and of the whole NDS, no other restrictions are placed on either the subsystem dynamics or the subsystem interactions. A matrix-rank-based necessary and sufficient condition is derived for the global identifiability of subsystem interactions, which leads to several conclusions about NDS structure identifiability when some a priori information is available. The same matrix also gives an explicit description of the set of subsystem interactions that cannot be distinguished from experimental data alone. In addition, under a well-posedness assumption, a necessary and sufficient condition is obtained for the reconstructibility of subsystem interactions from an NDS descriptor form model. This condition can be verified for each subsystem separately and is therefore attractive in the analysis and synthesis of a large-scale NDS. Simulation results show that, rather than increasing monotonically with the distance of the subsystem interactions to the undifferentiable set, the magnitude of the external output difference between two NDSs with distinct subsystem interactions grows much more rapidly when one of them is close to being unstable. In addition, the directions of the probing signals are also very important in distinguishing the external outputs of distinct NDSs. These findings are expected to be helpful in, for example, the design of identification experiments.
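The following sketch only illustrates the kind of output-difference study mentioned in the simulations above, under the common assumption that, for fixed subsystem dynamics, the external input-output map is a linear fractional transformation in the subsystem connection matrix Phi, i.e. G(s) = G_yu(s) + G_yv(s) Phi (I - G_zv(s) Phi)^{-1} G_zu(s) with well-posedness assumed; the exact formulation and the condition matrix are those of the paper, and all numerical values here are placeholders.

```python
# Hedged sketch: compare the external-output frequency responses of two NDSs
# that differ only in the subsystem connection matrix Phi (v = Phi z), under
# the LFT closed-loop map noted above. All TFMs are simple stable placeholders.
import numpy as np

rng = np.random.default_rng(1)
nz, nv, nu, ny = 3, 3, 2, 2

# Fixed (placeholder) gain directions shared by all frequencies.
Gzv0 = rng.standard_normal((nz, nv)); Gzu0 = rng.standard_normal((nz, nu))
Gyv0 = rng.standard_normal((ny, nv)); Gyu0 = rng.standard_normal((ny, nu))

def closed_loop(s, Phi):
    """y = (G_yu + G_yv Phi (I - G_zv Phi)^{-1} G_zu) u, well-posedness assumed."""
    p = 1.0 / (s + 2.0)                        # a simple stable first-order pole
    Gzv, Gzu, Gyv, Gyu = p * Gzv0, p * Gzu0, p * Gyv0, p * Gyu0
    M = np.linalg.solve(np.eye(nz) - Gzv @ Phi, Gzu)
    return Gyu + Gyv @ Phi @ M

Phi1 = 0.3 * rng.standard_normal((nv, nz))           # one interconnection
Phi2 = Phi1 + 0.05 * rng.standard_normal((nv, nz))   # a nearby interconnection

freqs = np.logspace(-2, 2, 200)
gap = max(np.linalg.norm(closed_loop(1j * w, Phi1) - closed_loop(1j * w, Phi2), 2)
          for w in freqs)
print("peak external-output frequency-response difference:", gap)
```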
This paper investigates requirements on each descriptor form subsystem under which a causal/impulse free networked dynamic system (NDS) can be constructed. For this purpose, a matrix-rank-based necessary and sufficient condition is first derived for the causality/impulse freeness of an NDS, in which the associated matrix depends affinely on the subsystem connections. From this result, a necessary and sufficient condition is derived for each subsystem such that there exists a subsystem connection matrix leading to a causal/impulse free NDS. This condition further leads to a necessary and sufficient condition for the existence of a local static output feedback that guarantees the construction of a causal/impulse free NDS. A prominent property of these conditions is that all the involved numerical computations are performed independently on each individual subsystem, which is quite attractive for reducing computational costs and improving numerical stability in large-scale NDS analysis and synthesis. Situations are also clarified in which NDS causality/impulse freeness is independent of subsystem connections, and it is shown that in some situations local static output feedback is not helpful in constructing a causal NDS.
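For context, a standard numerical test for impulse freeness of a regular descriptor pair (E, A) is rank [[E, 0], [A, E]] = n + rank(E), which is equivalent to deg det(sE - A) = rank(E). The sketch below implements this generic test on a lumped pair with placeholder matrices; the paper's own rank condition, in which the subsystem connection matrix enters affinely and the computations split over subsystems, is not reproduced here.

```python
# Hedged sketch: impulse-freeness test for a regular descriptor pair (E, A)
# via the block-matrix rank criterion rank [[E, 0], [A, E]] == n + rank(E).
import numpy as np

def is_impulse_free(E, A, tol=1e-9):
    n = E.shape[0]
    block = np.block([[E, np.zeros_like(E)], [A, E]])
    return np.linalg.matrix_rank(block, tol) == n + np.linalg.matrix_rank(E, tol)

# Example: an index-1 (impulse-free) pair vs. an index-2 (impulsive) pair.
E1 = np.diag([1.0, 0.0]);                 A1 = np.eye(2)
E2 = np.array([[0.0, 1.0], [0.0, 0.0]]);  A2 = np.eye(2)
print(is_impulse_free(E1, A1))   # True
print(is_impulse_free(E2, A2))   # False
```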
Identifiability of a single module in a network of transfer functions concerns the question of whether a particular transfer function in the network can be uniquely distinguished within a network model set on the basis of data. Whereas previous research has focused on situations in which all network signals are either excited or measured, we develop generalized analysis results for the situation of partial measurement and partial excitation. As identifiability conditions typically require a sufficient number of external excitation signals, this work introduces a novel network model structure in which excitation from unmeasured noise signals is included, leading to less conservative identifiability conditions than relying on measured excitation signals only. More importantly, graphical conditions are developed to verify global and generic identifiability of a single module based on the topology of the dynamic network. Depending on whether the input or the output of the module can be measured, we present four identifiability conditions that cover all possible situations in single module identification. These conditions further lead to synthesis approaches for allocating excitation signals and selecting measured signals so as to warrant single module identifiability. In addition, if the identifiability conditions are satisfied, indirect identification methods are developed to provide a consistent estimate of the module. All the obtained results are also extended to identifiability of multiple modules in the network.
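Graphical conditions of this kind are typically phrased in terms of numbers of vertex-disjoint paths between sets of excited and measured signals in the network graph. As a hedged illustration of that underlying computation only (not of the paper's full identifiability tests), the sketch below counts vertex-disjoint paths on a toy topology via a max-flow formulation with node splitting; the node names r1, r2, w1-w4 are invented for the example.

```python
# Hedged sketch: count internally vertex-disjoint directed paths from a set of
# excited signals to a set of measured signals using max-flow with node splitting.
import networkx as nx

def max_vertex_disjoint_paths(edges, sources, targets):
    H = nx.DiGraph()
    nodes = {v for e in edges for v in e} | set(sources) | set(targets)
    for v in nodes:                       # split v into v_in -> v_out, capacity 1
        H.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in edges:                    # original edges: unlimited capacity
        H.add_edge((u, "out"), (v, "in"))
    for s in sources:
        H.add_edge("S", (s, "in"), capacity=1)
    for t in targets:
        H.add_edge((t, "out"), "T", capacity=1)
    return nx.maximum_flow_value(H, "S", "T")

# Toy network: excitations at r1, r2; measurements at w3, w4.
edges = [("r1", "w1"), ("r2", "w2"), ("w1", "w3"), ("w2", "w3"), ("w2", "w4")]
print(max_vertex_disjoint_paths(edges, ["r1", "r2"], ["w3", "w4"]))   # 2
```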
In this paper, we analyze the two-node joint clock synchronization and ranging problem. We focus on the case of nodes that employ time-to-digital converters to determine the range between them precisely. This specific design choice leads to a sawtooth model for the captured signal, which has not been studied before from an estimation-theoretic standpoint. In the study of this model, we recover the basic conclusion of a well-known article by Freris, Graham, and Kumar on clock synchronization. More importantly, we discover a surprising identifiability result for the sawtooth signal model: noise improves the theoretical conditioning of the estimation of the phase and offset parameters. To complete our study, we provide performance references for joint clock synchronization and ranging with the sawtooth signal model through an exhaustive simulation study of basic estimation strategies under different realistic conditions. With our contributions in this paper, we enable further research into the estimation of sawtooth signal models and pave the way towards their industrial use for clock synchronization and ranging.
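A minimal sketch of a toy sawtooth observation model, y[n] = (phi + delta*n + noise) mod T, is given below together with a coarse grid-search baseline estimator over (phi, delta); this parameterization and the estimator are illustrative assumptions, not the exact model or the estimation strategies studied in the paper.

```python
# Hedged sketch: toy sawtooth (modulo-wrapped) observations and a brute-force
# grid-search baseline that minimizes a wrapped residual over (phi, delta).
import numpy as np

rng = np.random.default_rng(2)
T, N = 1.0, 200                                  # wrap period, sample count
phi_true, delta_true, sigma = 0.37, 0.011, 0.05  # placeholder parameters
n = np.arange(N)
y = (phi_true + delta_true * n + sigma * rng.standard_normal(N)) % T

def wrapped_residual(y, model, T):
    r = (y - model) % T
    return np.minimum(r, T - r)                  # distance on a circle of length T

phis = np.linspace(0.0, T, 200, endpoint=False)
deltas = np.linspace(0.0, 0.05, 200)
best = min(((np.sum(wrapped_residual(y, p + d * n, T) ** 2), p, d)
            for p in phis for d in deltas), key=lambda t: t[0])
print("estimated (phi, delta):", best[1], best[2])
```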
Among the versatile forms of dynamical patterns of activity exhibited by the brain, oscillations are one of the most salient and extensively studied, yet are still far from being well understood. In this paper, we provide various structural characterizations of the existence of oscillatory behavior in neural networks using a classical neural mass model of mesoscale brain activity called linear-threshold dynamics. Exploiting the switched-affine nature of this dynamics, we obtain various necessary and/or sufficient conditions on the network structure and its external input for the existence of oscillations in (i) two-dimensional excitatory-inhibitory networks (E-I pairs), (ii) networks with one inhibitory node and an arbitrary number of excitatory nodes, (iii) purely inhibitory networks with an arbitrary number of nodes, and (iv) networks of E-I pairs. Throughout our treatment, and given the arbitrary dimensionality of the considered dynamics, we rely on the lack of stable equilibria as a system-based proxy for the existence of oscillations, and provide extensive numerical results to support its tight relationship with the more standard, signal-based definition of oscillations in computational neuroscience.
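For concreteness, the sketch below simulates a common form of the linear-threshold rate dynamics, tau*dx/dt = -x + max(0, Wx + u), for a two-dimensional E-I pair with forward Euler and prints the late-time variability of the rates as a crude signal-based proxy for whether the activity settles or keeps fluctuating; the weights and input are arbitrary placeholders rather than values from the paper, whose conditions characterize exactly when such sustained oscillations can or cannot occur.

```python
# Hedged sketch: forward-Euler simulation of linear-threshold rate dynamics
#   tau * dx/dt = -x + max(0, W x + u)
# for an E-I pair with placeholder weights and input.
import numpy as np

tau, dt, steps = 1.0, 1e-3, 50_000
W = np.array([[ 2.0, -3.0],    # E node: excited by E (+), inhibited by I (-)
              [ 4.0, -0.5]])   # I node: excited by E (+), inhibited by I (-)
u = np.array([1.0, 0.0])       # constant external input
x = np.array([0.1, 0.1])

traj = np.empty((steps, 2))
for k in range(steps):
    x = x + (dt / tau) * (-x + np.maximum(0.0, W @ x + u))
    traj[k] = x

# Crude proxy: near-zero late-time standard deviation means the activity has
# settled; a clearly positive value indicates sustained fluctuation.
late = traj[steps // 2:]
print("late-time std of E and I rates:", late.std(axis=0))
```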