The ability to capture complex linguistic structures and long-term dependencies among words in a passage is essential for relation extraction (RE) tasks. Graph neural networks (GNNs), one common means of encoding dependency graphs, have proven effective in prior work. However, relatively little attention has been paid to the receptive fields of GNNs, which can be crucial for tasks involving extremely long text that requires discourse understanding. In this work, we leverage the idea of graph pooling and propose the Mirror Graph Convolution Network, a GNN model with a pooling-unpooling structure tailored to RE tasks: the pooling branch reduces the graph size, enabling the GNN to obtain larger receptive fields with fewer layers, while the unpooling branch restores the pooled graph to its original resolution for token-level RE. Experiments on two discourse-level relation extraction datasets demonstrate the effectiveness of our method, which yields significant improvements over prior methods, especially when modeling long-term dependencies is necessary. Moreover, we propose Clause Matching (CM), a novel graph pooling method that merges nodes according to their dependency relations in the graph; CM substantially reduces the graph size while retaining the main semantics of the input text.
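To make the pooling-unpooling idea concrete, the following is a minimal sketch of one "mirror" step combined with a Clause-Matching-style hard assignment. It is illustrative only: the relation set CLAUSAL_RELATIONS, the head/relation encoding, the mean-pooling aggregation, and the function names clause_matching and mirror_step are assumptions for this sketch, not the paper's exact procedure.

```python
import numpy as np

# Clause Matching (CM) sketch: merge each token into its clause head by
# following head links upward until a clausal dependency relation (or the
# root) is reached. CLAUSAL_RELATIONS is an illustrative choice, not the
# paper's exact inventory of relations.
CLAUSAL_RELATIONS = {"ccomp", "xcomp", "advcl", "acl", "csubj", "conj"}

def clause_matching(heads, rels):
    """heads[i]: index of token i's dependency head (-1 for the root);
    rels[i]: relation label between token i and its head.
    Returns one cluster id per token, so each clause collapses to a node."""
    n = len(heads)
    anchor = [0] * n
    for i in range(n):
        j = i
        while heads[j] != -1 and rels[j] not in CLAUSAL_RELATIONS:
            j = heads[j]          # climb within the current clause
        anchor[i] = j             # stop at the clause head
    ids = {a: k for k, a in enumerate(sorted(set(anchor)))}
    return [ids[a] for a in anchor]

def mirror_step(X, A, cluster):
    """One pooling-unpooling ('mirror') step with a hard assignment:
    pool features and adjacency onto the coarse graph, then unpool by
    broadcasting each cluster's feature back to its member tokens."""
    n, k = len(cluster), max(cluster) + 1
    S = np.zeros((n, k))                        # hard assignment matrix
    S[np.arange(n), cluster] = 1.0
    sizes = S.sum(axis=0)                       # tokens per cluster
    X_pool = (S.T @ X) / sizes[:, None]         # mean-pool member features
    A_pool = ((S.T @ A @ S) > 0).astype(float)  # coarse adjacency
    X_unpool = S @ X_pool                       # restore token resolution
    return X_pool, A_pool, X_unpool

# Toy usage: "She said he left", with "left" attached to "said" via ccomp.
heads = [1, -1, 3, 1]                  # She->said, said=root, he->left, left->said
rels = ["nsubj", "root", "nsubj", "ccomp"]
cluster = clause_matching(heads, rels)  # -> two clusters: {She, said}, {he, left}
X = np.eye(4)                           # dummy one-hot token features
A = np.eye(4)                           # dummy adjacency (self-loops only)
X_pool, A_pool, X_unpool = mirror_step(X, A, cluster)
```

In this sketch a GNN layer applied to (X_pool, A_pool) sees the whole clause graph within far fewer hops than on the token graph, while the unpooled X_unpool keeps a per-token representation for token-level prediction; the actual model's aggregation and unpooling details may differ.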