Tighter Bound Estimation of Sensitivity Analysis for Incremental and Decremental Data Modification


Abstract

In large-scale classification problems, the data set is frequently updated as data is added to or removed from the original data set. In this setting, conventional incremental learning, which updates an existing classifier by explicitly modeling the data modification, is more efficient than retraining a new classifier from scratch. Sometimes, however, we are more interested in determining whether the classifier should be updated at all, or in performing other sensitivity analysis tasks. To handle such tasks, we propose an algorithm that makes rational inferences about the updated linear classifier without exactly updating it. Specifically, the proposed algorithm estimates upper and lower bounds on the updated classifier's coefficient matrix, with a low computational complexity related to the size of the data modification. Both theoretical analysis and experimental results show that the proposed approach is superior to existing methods in terms of the tightness of the coefficient bounds and computational complexity.
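The general idea of bounding the retrained coefficients without retraining can be illustrated with a small sketch. The snippet below is not the paper's algorithm; it is a hedged illustration that assumes an L2-regularized logistic regression whose objective is lam-strongly convex. Under that assumption, the retrained solution lies within a ball of radius ||grad F_new(w_old)|| / lam around the old solution, and that gradient can be evaluated using only the added and removed samples. The function names (`logistic_grad`, `coefficient_bounds`) and the averaged-loss objective are assumptions introduced here for illustration.

```python
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of sum_i log(1 + exp(-y_i * x_i^T w)) over the given samples."""
    if len(y) == 0:
        return np.zeros_like(w)
    z = y * (X @ w)
    return X.T @ (-y / (1.0 + np.exp(z)))

def coefficient_bounds(w_old, lam, n_old, X_add, y_add, X_del, y_del):
    """Element-wise bounds on the coefficients obtained by retraining on the
    modified data, without actually retraining.

    Assumed objective (averaged loss plus ridge penalty, lam-strongly convex):
        F(w) = (1/n) * sum_i log(1 + exp(-y_i x_i^T w)) + (lam/2) * ||w||^2
    """
    n_new = n_old + len(y_add) - len(y_del)
    # Gradient of the new objective at the old optimum.  Because the gradient
    # of the old objective vanishes at w_old, the data-dependent part reduces
    # to the added/removed samples plus a rescaling of the ridge term.
    g_data = (logistic_grad(w_old, X_add, y_add)
              - logistic_grad(w_old, X_del, y_del)) / n_new
    g_new = g_data + lam * w_old * (1.0 - n_old / n_new)
    # lam-strong convexity implies ||w_new - w_old|| <= ||g_new|| / lam, which
    # yields conservative element-wise bounds on every coefficient.
    radius = np.linalg.norm(g_new) / lam
    return w_old - radius, w_old + radius
```

In this sketch the cost is dominated by the gradient over the added and removed samples, so it scales with the size of the data modification rather than the full data set, mirroring the complexity claim in the abstract; the paper's contribution is a tighter bound than such a generic strong-convexity argument provides.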
