Localization transition induced by learning in random searches


Abstract

We solve an adaptive search model in which a random walker or Lévy flight stochastically resets to previously visited sites on a $d$-dimensional lattice containing one trapping site. Owing to reinforcement, a phase transition occurs when the resetting rate crosses a threshold, above which non-diffusive stationary states emerge, localized around the inhomogeneity. The threshold depends on the trapping strength and on the walker's return probability in the memoryless case. The transition belongs to the same class as the self-consistent theory of Anderson localization. These results show that, similarly to many living organisms and unlike the well-studied Markovian walks, non-Markovian movement processes can allow agents to learn about their environment and promise to bring adaptive solutions to search tasks.
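As a concrete illustration of the kind of dynamics described above, here is a minimal simulation sketch, not taken from the paper itself. It assumes a one-dimensional lattice, relocation to previously visited sites chosen in proportion to their visit counts (a linear-reinforcement memory), and a single trap at the origin where the walker is held with some probability at each step. The function name simulate and the parameters q (resetting rate) and gamma (trapping strength) are hypothetical labels introduced only for this sketch.

```python
import numpy as np

def simulate(q=0.35, gamma=0.9, n_steps=20000, trap_site=0, seed=0):
    """Minimal 1D sketch of a memory-based random search with one trapping site.

    Assumptions (not specified in the abstract): at each step the walker
    either relocates to a previously visited site chosen proportionally to
    its visit count (probability q), or takes a nearest-neighbour step
    (probability 1 - q); at the trap it is held in place with probability
    gamma instead of moving.
    """
    rng = np.random.default_rng(seed)
    pos = 10                        # start away from the trap
    visits = {pos: 1}               # memory: visit counts per site
    trajectory = [pos]
    for _ in range(n_steps):
        if pos == trap_site and rng.random() < gamma:
            pass                    # held at the trap this step
        elif rng.random() < q:
            # preferential relocation to a remembered site
            sites = list(visits)
            weights = np.array([visits[s] for s in sites], dtype=float)
            pos = sites[rng.choice(len(sites), p=weights / weights.sum())]
        else:
            pos = pos + rng.choice([-1, 1])   # ordinary diffusive step
        visits[pos] = visits.get(pos, 0) + 1
        trajectory.append(pos)
    return np.array(trajectory)

# In this toy version, the fraction of time spent at the trap grows with q;
# for large enough q the occupation distribution stops spreading and stays
# concentrated around the trap, mimicking the localized stationary states.
traj = simulate(q=0.35)
print("occupancy of trap site:", np.mean(traj == 0))
```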
