Adversarial attacks have recently drawn considerable attention from the artificial intelligence community, since many machine learning algorithms have been found vulnerable to malicious attacks. This paper studies adversarial attacks on scale-free networks to test their robustness in terms of statistical measures. In addition to the well-known random link rewiring (RLR) attack, two heuristic attacks are formulated and simulated: degree-addition-based link rewiring (DALR) and degree-interval-based link rewiring (DILR). These three strategies are applied to attack a number of strong scale-free networks of various sizes generated from the Barabási-Albert model. It is found that both DALR and DILR are more effective than RLR, in the sense that they can accomplish the same attack by rewiring fewer links. However, DILR is as concealed as RLR, in the sense that both introduce only relatively small changes to several typical structural properties, such as the average shortest path length, the average clustering coefficient, and the average diagonal distance. The results of this paper suggest that, from the viewpoint of adversarial attacks, classifying a network as scale-free must be done with great care.
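
To make the setting concrete, the sketch below illustrates one plausible reading of the baseline RLR attack: repeatedly replacing a randomly chosen existing link with a randomly chosen non-link in a Barabási-Albert network, then comparing structural properties before and after. This is only a minimal illustration under that assumption; the function names, parameters, and rewiring details are not taken from the paper, and the DALR and DILR heuristics are not reproduced here.

```python
# Minimal sketch of a random link rewiring (RLR) style attack on a
# Barabasi-Albert network, assuming RLR = delete a random edge, add a
# random non-edge. Illustrative only; not the authors' implementation.
import random
import networkx as nx

def random_link_rewiring(G, n_rewires, seed=None):
    """Rewire n_rewires links: remove a random edge, add a random non-edge."""
    rng = random.Random(seed)
    H = G.copy()
    nodes = list(H.nodes())
    for _ in range(n_rewires):
        u, v = rng.choice(list(H.edges()))
        H.remove_edge(u, v)
        # Draw a currently non-existing link uniformly at random.
        while True:
            a, b = rng.sample(nodes, 2)
            if not H.has_edge(a, b):
                H.add_edge(a, b)
                break
    return H

# Generate a scale-free network from the Barabasi-Albert model and attack it.
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)
G_attacked = random_link_rewiring(G, n_rewires=50, seed=42)

# Compare typical structural properties before and after the attack.
for name, g in [("original", G), ("attacked", G_attacked)]:
    # Restrict path-length computation to the largest connected component,
    # since rewiring may in principle disconnect the graph.
    lcc = g.subgraph(max(nx.connected_components(g), key=len))
    print(name,
          "avg shortest path length:", nx.average_shortest_path_length(lcc),
          "avg clustering:", nx.average_clustering(g))
```

Comparing the printed quantities before and after rewiring is one simple way to gauge how "concealed" an attack is, in the spirit of the structural measures listed in the abstract.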