Exactly solving multi-objective integer programming (MOIP) problems is often very time-consuming, especially for large and complex problems. Parallel computing has the potential to significantly reduce the time taken to solve such problems, but only if suitable algorithms are used. The first of our new algorithms follows a simple technique, yet demonstrates impressive performance for such a straightforward design. We then introduce new theory for developing more efficient parallel algorithms. This theory uses elements of the symmetric group to permute the objective functions, assigning a different workload to each thread, and applies to algorithms that order the objective functions lexicographically. As a result, information and updated bounds can be shared in real time, creating a synergy between threads. We design and implement two algorithms that take advantage of this theory. To analyse the running times of our three algorithms, we compare them against two existing algorithms from the literature, and against using multiple threads within our chosen IP solver, CPLEX. This survey of six parallel algorithms, the first of its kind, demonstrates the advantages of parallel computing. Across all problem types tested, our new algorithms are on par with existing algorithms on smaller cases and significantly outperform them on larger cases. These new algorithms, together with freely available implementations, allow the investigation of complex MOIP problems with four or more objectives.
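
To illustrate the permutation-based workload assignment described above, the following minimal Python sketch shows how distinct elements of the symmetric group can give each worker thread its own lexicographic ordering of the objectives while all threads read and write a shared bound store. The names `solve_lexicographic`, `run_parallel`, `P`, and `shared_bounds` are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch (assumed names: P, solve_lexicographic, run_parallel,
# shared_bounds) of assigning distinct lexicographic objective orderings,
# i.e. distinct elements of the symmetric group S_P, to worker threads.
from itertools import permutations, islice
from concurrent.futures import ThreadPoolExecutor
import threading

P = 4  # number of objectives

# Bounds shared by all threads; a lock keeps concurrent updates consistent.
shared_bounds = {}
bounds_lock = threading.Lock()

def solve_lexicographic(ordering):
    """Placeholder for a lexicographic MOIP solve under the given objective order.

    A real implementation would call an IP solver (e.g. CPLEX) repeatedly,
    optimising objective ordering[0] first, then ordering[1] subject to the
    previous optima, and so on, sharing updated bounds via shared_bounds.
    """
    with bounds_lock:
        shared_bounds[ordering] = None  # record that this ordering was explored
    return ordering

def run_parallel(num_threads):
    # Each thread receives a distinct permutation of the objective indices.
    orderings = list(islice(permutations(range(P)), num_threads))
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(solve_lexicographic, orderings))

if __name__ == "__main__":
    print(run_parallel(num_threads=4))
```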