In a recent approach, we proposed to model an access control mechanism as a Markov Decision Process, arguing that access control decisions can be made with well-defined mechanisms from decision theory. In this paper we present an implementation of such a mechanism using the open-source solver GLPK, modelling the problem in the GMPL language. We illustrate our approach with a simple yet expressive example and show how varying some parameters changes the final outcome. In particular, we show that in addition to returning a decision, we can also compute the value of each decision.
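The abstract does not reproduce the model itself; as a rough, hypothetical illustration of the underlying idea (solving a small discounted MDP by linear programming, with scipy's linprog standing in for GLPK/GMPL and made-up transition and reward numbers), one could write:

# Minimal sketch: a small discounted MDP solved as a linear program.
# All transition probabilities and rewards below are hypothetical.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions, gamma = 2, 2, 0.9

# P[s, a, s']: probability of reaching s' when action a is taken in state s
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
# R[s, a]: immediate reward of each decision (e.g. grant vs. deny access)
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# LP: minimize sum_s V(s)  subject to  V(s) >= R(s,a) + gamma * sum_s' P(s,a,s') V(s')
A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        row = gamma * P[s, a]
        row[s] -= 1.0                      # (gamma * P(s,a,.) - e_s) . V <= -R(s,a)
        A_ub.append(row)
        b_ub.append(-R[s, a])

res = linprog(c=np.ones(n_states), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states)
V = res.x                                      # optimal state values
Q = R + gamma * np.einsum("sat,t->sa", P, V)   # the value of each decision in each state
print("V*:", V, "Q*:", Q, "greedy policy:", Q.argmax(axis=1))

The same linear program can be written almost verbatim in GMPL and handed to GLPK; the constraint duals then correspond to state-action occupancies, which is one way to read off the value of each candidate decision rather than just the chosen one.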
The objective of this work is to study continuous-time Markov decision processes on a general Borel state space, with both impulsive and continuous controls, under an infinite-horizon discounted cost criterion. The continuous-time controlled process is shown
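The truncated abstract does not state the cost criterion explicitly; under the usual conventions for this setting (notation assumed here, not taken from the paper), an infinite-horizon discounted cost with both gradual and impulsive controls takes a form such as
$$ V(x, \pi) \;=\; \mathbb{E}^{\pi}_{x}\!\left[ \int_{0}^{\infty} e^{-\alpha t}\, c(\xi_t, a_t)\, dt \;+\; \sum_{n} e^{-\alpha \tau_n}\, C\!\left(\xi_{\tau_n^-}, b_n\right) \right], $$
where $\alpha > 0$ is the discount rate, $c$ is the running cost under the continuous control $a_t$, and $C$ is the cost of the $n$-th impulse $b_n$ applied at time $\tau_n$.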
Coordination of distributed agents is required for problems arising in many areas, including multi-robot systems, networking and e-commerce. As a formal framework for such problems, we use the decentralized partially observable Markov decision process (Dec-POMDP).
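For reference (notation assumed here, not necessarily the paper's), a Dec-POMDP is typically specified by a tuple
$$ \langle I, S, \{A_i\}_{i \in I}, P, R, \{\Omega_i\}_{i \in I}, O \rangle, $$
where $I$ is the set of agents, $P(s' \mid s, \vec a)$ is the joint transition kernel, $R(s, \vec a)$ is a single shared reward, and $O(\vec o \mid s', \vec a)$ is the joint observation function; each agent $i$ observes only its own component $o_i$ of the joint observation.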
Security researchers have stated that the core concept behind current implementations of access control predates the Internet. These assertions are made to point out that there is a foundational gap in this field, and that one should consider revisiting these foundations.
We study a novel variant of online finite-horizon Markov Decision Processes with adversarially changing loss functions and initially unknown dynamics. In each episode, the learner suffers the loss accumulated along the trajectory realized by the policy it chooses for that episode.
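For context, performance in this kind of online setting is usually measured by regret against the best fixed policy in hindsight; a standard definition (notation assumed here, not taken from the abstract) is
$$ R_T \;=\; \sum_{t=1}^{T} \ell_t(\pi_t) \;-\; \min_{\pi} \sum_{t=1}^{T} \ell_t(\pi), $$
where $\ell_t(\pi)$ is the expected episode-$t$ loss of policy $\pi$ and $\pi_t$ is the policy the learner plays in episode $t$.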
We consider the constrained Markov Decision Process (CMDP) problem, where an agent interacts with a unichain Markov Decision Process. At every interaction, the agent obtains a reward. Further, there are $K$ cost functions. The agent aims to maximize the long-term average reward while keeping the $K$ long-term average costs below given thresholds.
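The abstract stops before describing a solution method; purely as a generic illustration (the textbook occupancy-measure formulation, not the paper's algorithm, with all numbers hypothetical), a unichain average-reward CMDP with a known model can be written as a linear program:

# Sketch: occupancy-measure LP for an average-reward unichain CMDP with a known model.
# All transition, reward, cost and threshold values below are made up.
import numpy as np
from scipy.optimize import linprog

nS, nA, K = 2, 2, 1
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.4, 0.6], [0.7, 0.3]]])      # P[s, a, s']
r = np.array([[1.0, 0.5], [0.2, 2.0]])        # reward r[s, a]
cost = np.array([[[0.0, 1.0], [0.5, 0.0]]])   # cost[k, s, a], here K = 1
tau = np.array([0.4])                         # cost thresholds

# Variable: stationary occupancy measure mu[s, a] >= 0 (flattened to length nS*nA)
# maximize  sum_{s,a} mu[s,a] r[s,a]
# s.t.      sum_a mu[s',a] = sum_{s,a} mu[s,a] P[s,a,s']   (stationarity)
#           sum_{s,a} mu[s,a] = 1                          (probability measure)
#           sum_{s,a} mu[s,a] cost[k,s,a] <= tau[k]        (K cost constraints)
A_eq = np.zeros((nS + 1, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = P[s, a, sp] - (1.0 if s == sp else 0.0)
A_eq[nS, :] = 1.0
b_eq = np.append(np.zeros(nS), 1.0)

res = linprog(c=-r.reshape(-1), A_ub=cost.reshape(K, nS * nA), b_ub=tau,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA))
mu = res.x.reshape(nS, nA)
policy = mu / mu.sum(axis=1, keepdims=True)   # pi(a|s) recovered from the occupancy measure
print("average reward:", -res.fun, "policy:", policy)

The optimal occupancy measure yields a stationary policy that may be randomized, which is a well-known feature of CMDPs: unlike unconstrained MDPs, they need not admit a deterministic optimal policy.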