Emergency control, typified by under-voltage load shedding (UVLS), is widely used to address low-voltage and voltage-instability problems in practical power systems under contingencies. However, existing emergency control schemes are rule-based and cannot adapt to uncertain and varying operating conditions. This paper proposes an adaptive UVLS algorithm for emergency control based on deep reinforcement learning (DRL) and expert systems. We first construct dynamic components that represent the power system operation as the environment. The transient voltage recovery criterion, which imposes time-varying requirements on UVLS, is incorporated into the states and the reward function to guide the learning of the deep neural networks. The proposed approach avoids the tuning of reward-function coefficients, which has been regarded as a deficiency of existing DRL-based algorithms. Extensive case studies show that the proposed method outperforms the traditional UVLS relay in both the timeliness and the efficacy of emergency control.
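To make the state and reward construction described above concrete, the following is a minimal, hypothetical sketch of a Gym-style UVLS environment in which the time-varying transient voltage recovery criterion (TVRC) enters both the observation and the penalty term. The class name, bus count, thresholds, and toy voltage response are all illustrative assumptions, not the authors' implementation; in particular, the paper's reward is built from the TVRC itself to avoid coefficient tuning, whereas this toy reward only hints at that idea.

```python
# Hypothetical sketch of a DRL environment for adaptive UVLS.
# All names, thresholds, and dynamics are illustrative assumptions.
import numpy as np


class UVLSEnv:
    """Minimal under-voltage load-shedding environment sketch."""

    def __init__(self, n_buses=4, dt=0.1, horizon=50):
        self.n_buses = n_buses
        self.dt = dt
        self.horizon = horizon
        self.reset()

    def tvrc_threshold(self, t):
        # Illustrative piecewise TVRC: voltage must recover above
        # 0.7 p.u. within 1 s and above 0.9 p.u. within 3 s of the fault.
        if t < 1.0:
            return 0.0
        if t < 3.0:
            return 0.7
        return 0.9

    def reset(self):
        self.t = 0.0
        self.step_count = 0
        self.voltages = np.full(self.n_buses, 0.6)  # depressed post-fault voltages
        self.load_factor = np.ones(self.n_buses)    # fraction of load still connected
        return self._state()

    def _state(self):
        # State = bus voltages, remaining load, and the current TVRC threshold,
        # so the agent observes the time-varying recovery requirement directly.
        return np.concatenate(
            [self.voltages, self.load_factor, [self.tvrc_threshold(self.t)]]
        )

    def step(self, action):
        # action[i] in [0, 1]: fraction of remaining load to shed at bus i.
        shed = np.clip(action, 0.0, 1.0) * self.load_factor
        self.load_factor -= shed
        # Toy voltage response: shedding load raises local voltage slightly.
        self.voltages = np.clip(self.voltages + 0.5 * shed + 0.01, 0.0, 1.05)
        self.t += self.dt
        self.step_count += 1

        threshold = self.tvrc_threshold(self.t)
        violation = np.maximum(threshold - self.voltages, 0.0).sum()
        # Penalize TVRC violation and the amount of load shed, both in p.u.
        reward = -violation - shed.sum()
        done = self.step_count >= self.horizon
        return self._state(), reward, done, {}
```

In this sketch, any standard DRL agent (e.g., DQN or DDPG) could be trained by repeatedly calling `reset()` and `step()`; the key point it illustrates is that the TVRC appears both in the observation vector and in the violation penalty, which is the role the abstract attributes to it.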