
Finite Dimensional Approximation to Muscular Response in Force-Fatigue Dynamics using Functional Electrical Stimulation

Posted by: Jeremy Rouot
Publication date: 2021
Research language: English
Authored by: Toufik Bakir





Recent dynamical models, based on the seminal work of V. Hill, make it possible to predict the muscular response to functional electrostimulation (FES) in both the isometric and non-isometric cases. The physical controls are modeled as Dirac pulses and lead to a sampled-data control system, the sampling times corresponding to the stimulation instants, where the output is the muscular force response. Such dynamics are suitable for computing optimized controls aimed at producing a constant force or force strengthening, but are too complex for real-time applications. The objective of this article is to construct a finite-dimensional approximation of this response to provide fast optimizing schemes, in particular for the design of a smart electrostimulator for muscular reinforcement or rehabilitation. This is an ongoing industrial project based on force-fatigue models validated by experiments. Moreover, it opens the road to applying optimal control to track a reference trajectory in the joint angular variable to produce movement in the non-isometric models.
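To make the sampled-data structure concrete, the following is a minimal sketch of a Ding-type force model in which each stimulation instant injects a Dirac impulse (scaled by a nonlinear summation factor) into a calcium-dynamics proxy that drives the force. The parameter values and function names here are illustrative, not the calibrated force-fatigue model of the paper.

```python
import math

# Illustrative constants (not the paper's calibrated values).
TAU_C = 0.020   # Ca2+ kinetics time constant (s)
R0 = 1.43       # nonlinear summation factor for closely spaced pulses
A = 3.009       # force scaling (N/s)
KM = 0.103      # half-saturation constant
TAU_1 = 0.0509  # force decay time constant (s)
TAU_2 = 0.0147  # additional, activation-dependent decay constant (s)

def simulate_force(pulse_times, t_end=0.4, dt=1e-4):
    """Euler integration of the CN/F dynamics; each Dirac pulse
    produces an instantaneous jump of the calcium proxy CN."""
    cn = f = 0.0
    pulses = sorted(pulse_times)
    prev_pulse = None
    trace = []
    for k in range(int(t_end / dt)):
        t = k * dt
        while pulses and pulses[0] <= t:
            ti = pulses.pop(0)
            # Pulses arriving shortly after a previous one count more (R0 > 1).
            ri = 1.0 if prev_pulse is None else \
                1.0 + (R0 - 1.0) * math.exp(-(ti - prev_pulse) / TAU_C)
            cn += ri
            prev_pulse = ti
        sat = cn / (KM + cn)  # Michaelis-Menten saturation
        f += dt * (A * sat - f / (TAU_1 + TAU_2 * sat))
        cn += dt * (-cn / TAU_C)
        trace.append(f)
    return trace

force = simulate_force([0.0, 0.03, 0.06, 0.09])
```

The force trace rises during the pulse train and relaxes afterwards; the stimulation times are the only control input, which is what makes the system sampled-data.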




Related research

Functional electrical stimulation (FES) is used to activate the dysfunctional lower limb muscles of individuals with neuromuscular disorders to produce cycling as a means of exercise and rehabilitation. However, FES-cycling is still metabolically inefficient and yields low power output at the cycle crank compared to able-bodied cycling. Previous literature suggests that these problems are symptomatic of poor muscle control and non-physiological muscle fiber recruitment. The latter is a known problem with FES in general, and the former motivates investigation of better control methods for FES-cycling. In this paper, a stimulation pattern for quadriceps femoris-only FES-cycling is derived based on the effectiveness of knee joint torque in producing forward pedaling. In addition, a switched sliding-mode controller is designed for the uncertain, nonlinear cycle-rider system with autonomous state-dependent switching. The switched controller yields ultimately bounded tracking of a desired trajectory in the presence of an unknown, time-varying, bounded disturbance, provided a reverse dwell-time condition is satisfied by appropriate choice of the control gains and a sufficient desired cadence. Stability is derived through Lyapunov methods for switched systems, and experimental results demonstrate the performance of the switched control system under typical cycling conditions.
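The notion of ultimately bounded tracking under a bounded disturbance can be illustrated on a toy first-order system (this is not the cycle-rider model, and the gains and disturbance below are made up): a sign-based sliding-mode law with gain exceeding the disturbance bound drives the tracking error into a small residual band.

```python
import math

D, K, DT = 0.5, 2.0, 1e-3  # disturbance bound, control gain (K > D), step size

def track(x0, ref, t_end=5.0):
    """Simulate x' = u + d(t) with u = -K*sign(e), e = x - ref(t),
    and an unknown-to-the-controller disturbance |d| <= D."""
    x, errs = x0, []
    for k in range(int(t_end / DT)):
        t = k * DT
        e = x - ref(t)
        u = -K * (1.0 if e > 0 else -1.0)  # sliding-mode law: only sign of e used
        d = D * math.sin(3.0 * t)          # bounded, time-varying disturbance
        x += DT * (u + d)
        errs.append(abs(e))
    return errs

errs = track(x0=2.0, ref=lambda t: math.sin(t))
```

Because K exceeds the disturbance bound plus the reference rate, the error decreases at a guaranteed rate until it chatters inside a band of width on the order of the discretization step, i.e., it is ultimately bounded rather than exactly zero.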
Human movement disorders or paralysis lead to the loss of control of muscle activation and thus motor control. Functional Electrical Stimulation (FES) is an established and safe technique for contracting muscles by stimulating the skin above a muscle to induce its contraction. However, an open challenge remains on how to restore motor abilities to human limbs through FES, as the problem of controlling the stimulation is unclear. We take a robotics perspective on this problem, by developing robot learning algorithms that control the ultimate humanoid robot, the human body, through electrical muscle stimulation. Human muscles are not trivial to control as actuators, since their force production is non-stationary as a result of fatigue and other internal state changes, in contrast to robot actuators, which are well understood and stationary over broad operating ranges. We present our Deep Reinforcement Learning approach to the control of human muscles with FES, using a recurrent neural network for dynamic state representation, to overcome the unobserved elements of the behaviour of human muscles under external stimulation. We demonstrate our technique both in neuromuscular simulations and experimentally on a human. Our results show that our controller can learn to manipulate human muscles, applying appropriate levels of stimulation to achieve the given tasks while compensating for advancing muscle fatigue which arises throughout the tasks. Additionally, our technique can learn quickly enough to be implemented in real-world human-in-the-loop settings.
We consider the dynamic inventory problem with non-stationary demands. It has long been known that non-stationary (s, S) policies are optimal for this problem. However, finding optimal policy parameters remains a computational challenge, as it requires solving a large-scale stochastic dynamic program. To address this, we devise a recursion-free approximation for the optimal cost function of the problem. This enables us to compute policy parameters heuristically, without resorting to a stochastic dynamic program. The heuristic is easy to understand and use, since it follows from elementary methods of convex minimization and shortest paths, yet it is very effective and outperforms earlier heuristics.
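To make the policy class concrete, here is a minimal sketch of how a non-stationary (s, S) policy acts once its parameters are given: in period t, order up to S[t] whenever inventory falls below s[t]. The parameters and demands below are made up for illustration; computing good parameters is what the paper's heuristic does.

```python
def run_sS_policy(s, S, demands, start_inventory=0):
    """Apply a non-stationary (s, S) policy period by period.
    Unmet demand is backlogged (inventory may go negative)."""
    inv, orders = start_inventory, []
    for t, d in enumerate(demands):
        q = S[t] - inv if inv < s[t] else 0  # order-up-to rule
        orders.append(q)
        inv += q - d
    return orders, inv

orders, final_inv = run_sS_policy(
    s=[5, 5, 8, 8], S=[20, 20, 30, 30], demands=[12, 7, 15, 10])
# Orders are placed only in periods where inventory dips below s[t].
```

The two thresholds per period are all the policy stores, which is why reducing the problem to choosing good (s[t], S[t]) pairs is computationally attractive.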
Li-Gang Cao, G. Colo, H. Sagawa (2009)
We present a thorough analysis of the effects of the tensor interaction on the multipole response of magic nuclei, using the fully self-consistent Random Phase Approximation (RPA) model with Skyrme interactions. We disentangle the modifications to the static mean field induced by the tensor terms, and the specific features of the residual particle-hole (p-h) tensor interaction, for quadrupole (2+), octupole (3-), and also magnetic dipole (1+) responses. It is pointed out that the tensor force has a larger effect on the magnetic dipole states than on the natural parity states 2+ and 3-, especially at the mean field level. Perspectives for a better assessment of the tensor force parameters are eventually discussed.
Guannan Qu, Adam Wierman (2020)
We consider a general asynchronous Stochastic Approximation (SA) scheme featuring a weighted infinity-norm contractive operator, and prove a bound on its finite-time convergence rate on a single trajectory. Additionally, we specialize the result to asynchronous $Q$-learning. The resulting bound matches the sharpest available bound for synchronous $Q$-learning, and improves over previously known bounds for asynchronous $Q$-learning.
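The asynchronous update the bound covers can be sketched in a few lines of tabular $Q$-learning: each sample updates a single (state, action) entry of a max-norm contractive fixed-point iteration. The two-state MDP below is made up for illustration, and the constant step size is chosen for this deterministic toy rather than matching the step-size schedule analyzed in the paper.

```python
import random

random.seed(0)
GAMMA, ALPHA = 0.9, 0.1  # discount factor, constant step size (toy choice)

def step(s, a):
    """Toy deterministic 2-state MDP: action 0 stays, action 1 flips
    the state; reward 1 for landing in state 1."""
    s2 = s if a == 0 else 1 - s
    return s2, float(s2 == 1)

Q = [[0.0, 0.0], [0.0, 0.0]]
s = 0
for _ in range(20000):
    a = random.randrange(2)                # uniform exploratory behavior policy
    s2, r = step(s, a)
    target = r + GAMMA * max(Q[s2])        # contractive Bellman target
    Q[s][a] += ALPHA * (target - Q[s][a])  # asynchronous: one entry per sample
    s = s2
```

For this toy MDP the fixed point works out to Q* = [[9, 10], [10, 9]], which the iterates approach; finite-time bounds of the kind proved in the paper quantify how fast, as a function of the step sizes and how often each entry is visited.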