The World Health Organisation currently recommends pre-screening for past infection prior to administering the only licensed dengue vaccine, CYD-TDV. Using a bounding analysis, we show that, despite the additional testing costs, this approach can improve the economic viability of CYD-TDV: effective testing reduces unnecessary vaccination costs while increasing the health benefit for vaccine recipients. When testing is cheap enough, these effects outweigh the added screening costs and make test-then-vaccinate strategies net-beneficial in many settings. We derived these results using a general approach for determining price thresholds for testing and vaccination, as well as optimal start and end ages for routine test-then-vaccinate programs. This approach requires only age-specific seroprevalence and a cost estimate for second infections. We demonstrate the approach across settings commonly used to evaluate CYD-TDV economics and highlight implications of our simple model for more detailed studies. We found that test-then-vaccinate strategies are generally more beneficial when started at younger ages, and that in some settings multiple years of testing can be more beneficial than testing only once, despite the increased investment in testing.
We use a stochastic Markovian dynamics approach to describe the spreading of vector-transmitted diseases, such as dengue, and to determine the disease threshold. The coexistence space is composed of two structures representing the human and mosquito populations. The human population follows susceptible-infected-recovered (SIR) dynamics and the mosquito population follows susceptible-infected-susceptible (SIS) dynamics. Humans are infected by infected mosquitoes and vice versa, so the SIS and SIR dynamics are interconnected. We develop a truncation scheme to solve the evolution equations, from which we obtain the disease threshold and the reproductive ratio. The threshold is also obtained by performing numerical simulations. We find that for certain values of the infection rates the spreading of the disease is impossible, regardless of the death rate of infected mosquitoes.
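A minimal mean-field sketch of the coupled SIR/SIS structure described above (not the paper's stochastic Markovian formulation or truncation scheme); the rate values, initial conditions, and integration settings are assumed for illustration only.

```python
# Mean-field Euler integration of coupled human SIR / mosquito SIS dynamics.
# Humans are infected by infected mosquitoes and vice versa; infected
# mosquitoes die at rate mu and are replaced by susceptibles (SIS structure).
# All parameter values here are illustrative assumptions.

def simulate_sir_sis(beta_hm, beta_mh, gamma, mu, dt=0.01, steps=50_000):
    sh, ih, rh = 0.99, 0.01, 0.0   # human S/I/R fractions
    sm, im = 0.99, 0.01            # mosquito S/I fractions
    for _ in range(steps):
        new_h = beta_hm * sh * im  # humans infected by mosquito bites
        new_m = beta_mh * sm * ih  # mosquitoes infected when biting humans
        rec = gamma * ih           # human recoveries
        die = mu * im              # infected-mosquito deaths
        sh, ih, rh = sh - dt * new_h, ih + dt * (new_h - rec), rh + dt * rec
        sm, im = sm + dt * (die - new_m), im + dt * (new_m - die)
    return rh  # final epidemic size in the human population
```

In this mean-field limit the threshold behaves like that of standard vector-borne models: when the cross-infection rates are small relative to recovery and mosquito death rates, the epidemic cannot take off, echoing the abstract's finding that for certain infection rates spreading is impossible.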
Mosquitoes are vectors of viral diseases with epidemic potential in many regions of the world, and in the absence of vaccines or therapies, their control is the main alternative. Chemical control through insecticides has been one of the conventional strategies, but it induces insecticide resistance, may affect other insects, and can cause ecological damage. Biological control through the release of mosquitoes infected by the maternally inherited bacterium Wolbachia, which inhibits their vector competence, has been proposed as an alternative. In practice, the effects of the two techniques may be intermingled: prior insecticide spraying may debilitate the wild population and so facilitate subsequent invasion by the bacterium; but the invasion may also be hindered if insecticide-susceptible mosquitoes are released into an environment where the wild population has become resistant as a result of preexisting, unintended insecticide exposure. To tackle such situations, we propose here a unifying model that accounts for the cross effects of both control techniques, and based on it we design release strategies able to infect a wild population. These strategies are feedback laws, whose stabilizing properties we study.
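The feedback-release idea can be illustrated on a standard bistable Wolbachia-frequency caricature, p' = p(1-p)(p - θ), where invasion only succeeds once the infection frequency p is pushed past the unstable threshold θ. The dynamics, gain, and threshold margin below are illustrative assumptions, not the paper's unifying model or its actual control laws.

```python
# Toy threshold feedback law for Wolbachia invasion: keep releasing infected
# mosquitoes (control input u) while the infection frequency p is below the
# unstable threshold theta plus a margin; once past it, the bistable drift
# carries p to fixation. All parameter values are illustrative assumptions.

def simulate_invasion(theta=0.4, gain=0.05, dt=0.01, steps=200_000):
    p = 0.0  # initial Wolbachia frequency in the wild population
    for _ in range(steps):
        u = gain if p < theta + 0.1 else 0.0       # feedback release law
        p += dt * (p * (1 - p) * (p - theta) + u)  # bistable drift + releases
    return p
```

With the feedback active, p crosses the unstable equilibrium and converges to fixation; with no releases (gain 0) the uninfected state persists, which is the stabilization property such feedback laws are designed to overcome.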
We predict vaccine efficacy with a measure of antigenic distance between influenza A(H3N2) and candidate vaccine viruses based on amino acid substitutions in the dominant epitopes. In 2016-2017, our model predicts 19% efficacy compared to 20% observed. This tool assists candidate vaccine selection by predicting human protection against circulating strains.
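The antigenic-distance measure can be sketched as the fraction of amino-acid substitutions at dominant-epitope sites between the vaccine and circulating strains, fed into a linear efficacy map. The epitope sites, example sequences, and map coefficients below are placeholders, not the study's fitted values.

```python
# Sketch of an epitope-based antigenic distance (p_epitope) and a linear
# efficacy map. Sites and coefficients are ASSUMED placeholders for
# illustration, not the fitted values from the study.

def p_epitope(vaccine_seq, circulating_seq, epitope_sites):
    """Fraction of dominant-epitope positions that differ between strains."""
    diffs = sum(vaccine_seq[i] != circulating_seq[i] for i in epitope_sites)
    return diffs / len(epitope_sites)

def predicted_efficacy(p_e, intercept=0.47, slope=1.5):
    """Linear map from antigenic distance to efficacy (placeholder coefficients)."""
    return max(0.0, intercept - slope * p_e)
```

Identical epitope sequences give distance zero and maximal predicted efficacy; increasing substitution load in the dominant epitope drives predicted protection down, matching the qualitative logic of the abstract.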
Vaccination against COVID-19 with the recently approved mRNA vaccines BNT162b2 (BioNTech/Pfizer) and mRNA-1273 (Moderna) is currently underway in a large number of countries. However, high incidence rates and rapidly spreading SARS-CoV-2 variants are concerning. In combination with acute supply deficits in Europe in early 2021, the question arises of whether stretching the vaccine, for instance by delaying the second dose, can make a significant contribution to preventing deaths, despite associated risks such as lower vaccine efficacy, the potential emergence of escape mutants, enhancement, waning immunity, reduced social acceptance of off-label vaccination, and liability shifts. A quantitative epidemiological assessment of the risks and benefits of non-standard vaccination protocols remains elusive. To clarify the situation and to provide a quantitative epidemiological foundation, we develop a stochastic epidemiological model that integrates specific vaccine rollout protocols into a risk-group-structured infectious disease dynamical model. Using the situation and conditions in Germany as a reference system, we show that delaying the second vaccine dose is expected to prevent deaths in the four- to five-digit range, should the incidence resurge. We show that this considerable public health benefit relies on the fact that both mRNA vaccines provide substantial protection against severe COVID-19 and death beginning 12 to 14 days after the first dose. The benefits of the protocol change are attenuated should vaccine compliance decrease substantially. To quantify the impact of the protocol change on vaccination adherence, we performed a large-scale online survey. We find that, in Germany, changing vaccination protocols may lead to small reductions in vaccination intention. In sum, we therefore expect the benefits of a strategy change to remain substantial and stable.
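The core dose-stretching trade-off can be illustrated with a back-of-envelope calculation: for a fixed dose supply, using doses as additional first doses averts more deaths than completing second doses whenever one-dose efficacy against death exceeds half of two-dose efficacy. This is a static sketch, not the paper's stochastic risk-group model; all numbers are assumed placeholders.

```python
# Back-of-envelope comparison of two allocation protocols for a fixed dose
# supply: standard (fully vaccinate half as many people) versus stretched
# (give everyone possible a first dose). Efficacy and IFR values are assumed
# placeholders, not estimates from the study.

def deaths_averted(doses, ifr, eff_one_dose, eff_two_dose):
    standard = (doses // 2) * ifr * eff_two_dose  # half as many people, full protection
    stretched = doses * ifr * eff_one_dose        # twice as many people, partial protection
    return standard, stretched
```

With substantial first-dose protection against death, as the abstract notes begins 12 to 14 days after the first dose, the stretched protocol dominates in this simple accounting.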
Efficient testing and vaccination protocols are critical aspects of epidemic management. To study the optimal allocation of limited testing and vaccination resources in a heterogeneous contact network of interacting susceptible, recovered, and infected individuals, we present a degree-based testing and vaccination model for which we use control-theoretic methods to derive optimal testing and vaccination policies. Within our framework, we find that optimal intervention policies first target high-degree nodes before shifting to lower-degree nodes in a time-dependent manner. Using such optimal policies, it is possible to delay outbreaks and reduce incidence rates to a greater extent than uniform and reinforcement-learning-based interventions, particularly on certain scale-free networks.
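The high-degree-first structure of the optimal policies can be sketched as a greedy allocation rule: with a limited per-period budget, intervene on susceptible nodes in descending degree order. This is a static simplification of the time-dependent control-theoretic policies described above; the function name and inputs are illustrative.

```python
# Greedy degree-prioritized allocation: given node degrees, the current
# susceptible set, and a per-period budget, select the highest-degree
# susceptible nodes first. A static caricature of the time-dependent
# optimal policies; names and inputs are illustrative assumptions.

def allocate_vaccines(degrees, susceptible, budget):
    ranked = sorted(susceptible, key=lambda n: degrees[n], reverse=True)
    return set(ranked[:budget])
```

Called once per period on the shrinking susceptible set, this rule naturally shifts from hubs to lower-degree nodes over time, mimicking the qualitative behavior of the optimal policies on scale-free networks.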