Outsourcing neural network inference tasks to an untrusted cloud raises data privacy and integrity concerns. To address these challenges, several privacy-preserving and verifiable inference techniques have been proposed that replace non-polynomial activation functions, such as the rectified linear unit (ReLU), with polynomial activation functions. Such techniques usually require polynomials with integer coefficients or polynomials over finite fields. Motivated by these requirements, several works have proposed replacing the ReLU activation function with the square activation function. In this work, we empirically show that the square function is not the best degree-$2$ polynomial replacement for ReLU, even when the polynomials are restricted to integer coefficients. We instead propose a degree-$2$ polynomial activation function with a first-order term and empirically show that it yields substantially better models. Our experiments with various architectures on the CIFAR-$10$ and CIFAR-$100$ datasets show that the proposed activation function improves test accuracy by up to $9.4\%$ compared to the square function.
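For concreteness, the comparison is between the square activation and a degree-$2$ polynomial that includes a first-order term, of the general form sketched below; the coefficients $a$ and $b$ are illustrative placeholders (integer-valued in the settings that motivate this work), not the specific values evaluated in the experiments.
% Square activation versus a generic degree-2 activation with a
% first-order term; a and b are placeholders, not the paper's
% exact coefficients.
\[
  \sigma_{\mathrm{sq}}(x) = x^{2},
  \qquad
  \sigma_{\mathrm{poly}}(x) = a\,x^{2} + b\,x .
\]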