The Kolmogorov axioms for probability functions are placed in the context of signed meadows. A completeness theorem is stated and proven for the resulting equational theory of probability calculus. Elementary definitions of probability theory are restated in this framework.
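To fix notation, one plausible equational rendering of the Kolmogorov axioms over a signed meadow is sketched below; the Boolean event operations $\sqcup$, $\sqcap$, the unit event $\mathbf{1}$, and the sign function $s$ of signed meadows are assumed here, and the paper's exact axioms may differ:
\[
P(\mathbf{1}) = 1, \qquad s(P(x)) \cdot P(x) = P(x), \qquad P(x \sqcup y) + P(x \sqcap y) = P(x) + P(y).
\]
The middle equation is an equational stand-in for non-negativity: since $s$ takes values in $\{-1,0,1\}$, the equation can only hold when $P(x) \geq 0$.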
Meadows have been proposed as alternatives to fields that admit a purely equational axiomatization. At the basis of meadows lies the decision to make the multiplicative inverse operation total by imposing that the multiplicative inverse of zero is zero. Thus, the multiplicative inverse operation of a meadow is an involution. In this paper, we study \emph{non-involutive meadows}, i.e.\ variants of meadows in which the multiplicative inverse of zero is not zero, and pay special attention to non-involutive meadows in which the multiplicative inverse of zero is one.
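A one-line computation (ours, for illustration) shows why the choice of $0^{-1}$ matters for involutivity:
\[
0^{-1} = 0 \;\Rightarrow\; (0^{-1})^{-1} = 0^{-1} = 0, \qquad\text{whereas}\qquad 0^{-1} = 1 \;\Rightarrow\; (0^{-1})^{-1} = 1^{-1} = 1 \neq 0.
\]
Hence with $0^{-1} = 1$ the reflection law $(x^{-1})^{-1} = x$ fails at $x = 0$, and the inverse operation is no longer an involution.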
We study the question, ``For which reals $x$ does there exist a measure $\mu$ such that $x$ is random relative to $\mu$?'' We show that for every nonrecursive $x$, there is a measure which makes $x$ random without concentrating on $x$. We give several conditions on $x$ equivalent to there being a continuous measure which makes $x$ random. We show that these conditions hold for all but countably many reals $x$, so for all but countably many $x$ there is a continuous measure which makes $x$ random. There is a metamathematical aspect to this investigation: as one requires higher arithmetical levels in the degree of randomness, one must make use of more iterates of the power set of the continuum to show that for all but countably many $x$ there is a continuous $\mu$ which makes $x$ random to that degree.
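For orientation, we recall the standard Martin-L\"of-style notion being relativized (the precise formulation in the paper may differ): $x \in 2^\omega$ is random relative to $\mu$ if
\[
x \notin \bigcap_{n \in \omega} U_n
\]
for every sequence $(U_n)_{n \in \omega}$ of uniformly effectively open sets (relative to a representation of $\mu$) with $\mu(U_n) \leq 2^{-n}$; and $\mu$ makes $x$ random \emph{without concentrating on} $x$ when, in addition, $\mu(\{x\}) = 0$.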
Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at high spatial resolution. Generalizing across shapes with such neural implicit representations amounts to learning priors over the respective function space and enables geometry reconstruction from partial or noisy observations. Existing generalization methods rely on conditioning a neural network on a low-dimensional latent code that is either regressed by an encoder or jointly optimized in the auto-decoder framework. Here, we formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task. We demonstrate that this approach performs on par with auto-decoder-based approaches while being an order of magnitude faster at test-time inference. We further demonstrate that the proposed gradient-based method outperforms encoder-decoder-based methods that leverage pooling-based set encoders.
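As a concrete illustration of the gradient-based recipe, here is a minimal MAML-style sketch for specializing a shared SDF network to one shape; it assumes PyTorch 2.x, and the network, loss, hyperparameters, and toy data below are our own illustrative choices, not the authors' implementation:

import torch
import torch.nn as nn

class SDFNet(nn.Module):
    # A small coordinate-based MLP mapping 3D points to signed distances.
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def inner_adapt(model, coords, sdf_gt, steps=5, inner_lr=1e-2):
    # Specialize a copy of the meta-parameters to one shape with a few SGD
    # steps; create_graph=True keeps the adaptation differentiable so the
    # outer loop can backpropagate through it (second-order MAML).
    params = dict(model.named_parameters())
    for _ in range(steps):
        pred = torch.func.functional_call(model, params, (coords,))
        loss = nn.functional.l1_loss(pred, sdf_gt)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    return params

model = SDFNet()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# Toy stand-in for per-shape samples of (3D point, ground-truth SDF value).
shapes = [(torch.rand(512, 3), torch.rand(512, 1)) for _ in range(4)]

meta_opt.zero_grad()
for coords, sdf_gt in shapes:
    adapted = inner_adapt(model, coords, sdf_gt)    # fast per-shape adaptation
    pred = torch.func.functional_call(model, adapted, (coords,))
    nn.functional.l1_loss(pred, sdf_gt).backward()  # accumulates meta-gradients
meta_opt.step()

At test time, reconstructing a new shape from partial observations is just another call to inner_adapt, which is why such specialization is much faster than optimizing a latent code in an auto-decoder.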
Probability logic has contributed to significant developments in the study of belief types in game-theoretical economics. We present a new probability logic for Harsanyi type spaces, show its completeness, and prove both a de-nesting property and a unique extension theorem. We then prove that multi-agent interactive epistemology has greater complexity than its single-agent counterpart by showing that if the probability indices of the belief language are restricted to a finite set of rationals and there are finitely many propositional letters, then the canonical space for probabilistic beliefs with one agent is finite, while the canonical space with at least two agents has the cardinality of the continuum. Finally, we generalize the three notions of definability in multimodal logics, namely implicit definability, reducibility, and explicit definability, to logics of probabilistic belief and knowledge. We find that S5-knowledge can be implicitly defined by probabilistic belief but not reduced to it, and hence is not explicitly definable by probabilistic belief.
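To make the nesting in question concrete, with the common operator notation (assumed here) in which $B_i^q \varphi$ reads ``agent $i$ assigns probability at least $q$ to $\varphi$'', a formula such as
\[
B_1^{1/2} B_2^{1/3} p
\]
says that agent 1 believes with probability at least $\tfrac{1}{2}$ that agent 2 assigns probability at least $\tfrac{1}{3}$ to $p$; a de-nesting property describes when such iterated beliefs can be reduced to non-nested ones.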
A \emph{meadow} is a commutative ring with an inverse operator satisfying $0^{-1}=0$. We determine the initial algebra of the meadows of characteristic 0 and show that its word problem is decidable.
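For reference, meadows are commonly axiomatized by the equations for commutative rings with unit together with two inverse laws (reflection and the restricted inverse law):
\[
(x^{-1})^{-1} = x, \qquad\qquad x \cdot (x \cdot x^{-1}) = x.
\]
These entail $0^{-1} = 0$: instantiating the second law at $x^{-1}$ and using the first gives $x^{-1} \cdot (x^{-1} \cdot x) = x^{-1}$, and taking $x = 0$ yields $0^{-1} = 0^{-1} \cdot (0^{-1} \cdot 0) = 0^{-1} \cdot 0 = 0$.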