Malicious software (malware) is a major cyber threat that must be tackled with Machine Learning (ML) techniques, because millions of new malware examples are injected into cyberspace every day. However, ML is vulnerable to attacks known as adversarial examples. In this paper, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties. This not only leads us to map attacks and defenses to partial order structures, but also allows us to clearly describe the attack-defense arms race in the AMD context. We draw a number of insights, including: knowing the defender's feature set is critical to the success of transfer attacks; the effectiveness of practical evasion attacks largely depends on the attacker's freedom in conducting manipulations in the problem space; knowing the attacker's manipulation set is critical to the defender's success; and the effectiveness of adversarial training depends on the defender's capability in identifying the most powerful attack. We also discuss a number of future research directions.