
Testing for Unobserved Heterogeneity via k-means Clustering

Published by: Andrew Patton
Publication date: 2019
Research field: Economics
Paper language: English





Clustering methods such as k-means have found widespread use in a variety of applications. This paper proposes a formal testing procedure to determine whether a null hypothesis of a single cluster, indicating homogeneity of the data, can be rejected in favor of multiple clusters. The test is simple to implement, valid under relatively mild conditions (including non-normality and heterogeneity of the data in aspects beyond those in the clustering analysis), and applicable in a range of contexts (including clustering when the time series dimension is small, or clustering on parameters other than the mean). We verify that the test has good size control in finite samples, and we illustrate the test in applications to clustering vehicle manufacturers and U.S. mutual funds.
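The gist of such a test can be conveyed with a simulation-based analogue. The sketch below is a toy stand-in, not the authors' procedure (which is valid under much weaker conditions, including non-normality): it compares the k-means objective for one versus two clusters and calibrates the ratio with a parametric bootstrap under a single-Gaussian null. All function names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_stat(X):
    # Ratio of within-cluster SSE for k=2 versus k=1; small values favour two clusters.
    sse1 = ((X - X.mean(axis=0)) ** 2).sum()
    sse2 = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).inertia_
    return sse2 / sse1

def homogeneity_pvalue(X, n_boot=499, seed=0):
    # Simulate the null of a single cluster by fitting one Gaussian to the data;
    # the Gaussian choice is purely illustrative.
    rng = np.random.default_rng(seed)
    stat = cluster_stat(X)
    mu, cov = X.mean(axis=0), np.atleast_2d(np.cov(X, rowvar=False))
    boot = [cluster_stat(rng.multivariate_normal(mu, cov, size=len(X)))
            for _ in range(n_boot)]
    return (1 + sum(b <= stat for b in boot)) / (n_boot + 1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
print(f"p-value: {homogeneity_pvalue(X):.3f}")  # small => reject a single cluster
```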




Read also

Yu-Chin Hsu, Ta-Cheng Huang, 2018
Unobserved heterogeneous treatment effects have been emphasized in the recent policy evaluation literature (see, e.g., Heckman and Vytlacil, 2005). This paper proposes a nonparametric test for unobserved heterogeneous treatment effects in a treatment effect model with a binary treatment assignment, allowing for individuals' self-selection into the treatment. Under the standard local average treatment effects (LATE) assumptions, i.e., the no-defiers condition, we derive testable model restrictions for the hypothesis of unobserved heterogeneous treatment effects. We also show that if the treatment outcomes satisfy a monotonicity assumption, these model restrictions are sufficient as well. We then propose a modified Kolmogorov-Smirnov-type test that is consistent and simple to implement. Monte Carlo simulations show that our test performs well in finite samples. For illustration, we apply the test to study heterogeneous treatment effects of the Job Training Partnership Act on earnings and the impact of fertility on family income; the null hypothesis of homogeneous treatment effects is rejected in the second application but not in the first.
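A loose illustration of the Kolmogorov-Smirnov idea (ignoring self-selection and the LATE machinery entirely, so this is not the authors' test): under a constant treatment effect tau, the treated outcome distribution is just a location shift of the control distribution, which can be probed by KS-comparing the samples after shifting by an estimate of tau. The function name is hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def constant_effect_check(y_treat, y_ctrl):
    # Under a homogeneous effect tau, F_treat(y) = F_ctrl(y - tau).
    # Plugging in an estimated tau distorts the exact KS null distribution,
    # one reason the paper uses a *modified* KS-type statistic.
    tau_hat = y_treat.mean() - y_ctrl.mean()
    return ks_2samp(y_treat, y_ctrl + tau_hat)

rng = np.random.default_rng(0)
y_ctrl = rng.normal(0, 1, 500)
y_treat = 2.0 * rng.normal(0, 1, 500) + 1.0   # effect varies across individual draws
print(constant_effect_check(y_treat, y_ctrl))  # small p-value => heterogeneous effects
```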
We provide a novel inferential framework to estimate the exact affine Stone index (EASI) model and analyze the welfare implications of price changes caused by taxes. Our framework is based on a nonparametric specification of the stochastic errors in the EASI incomplete demand system using Dirichlet processes. The proposal enables us to identify consumer clusters driven by unobserved preference heterogeneity while accounting for censoring, simultaneous endogeneity, and non-linearities. We perform an application based on a tax on electricity consumption in the Colombian economy. Our results suggest that there are four clusters due to unobserved preference heterogeneity, although 95% of our sample belongs to a single cluster; this suggests that, under the EASI model, observable variables describe preferences well in our application. We find that utilities appear to be inelastic normal goods with non-linear Engel curves. Joint predictive distributions indicate that the electricity tax generates substitution effects between electricity and other non-utility goods; these distributions, as well as the Slutsky matrices, suggest good model fit. We find a 95% probability that the equivalent variation, as a percentage of income of the representative household, lies between 0.60% and 1.49% given an approximately 1% electricity tariff increase. However, the effects are heterogeneous, with higher socioeconomic strata facing larger welfare losses on average. This highlights the potentially sizable welfare implications of taxing inelastic services.
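A rough analogue of the cluster-counting step only: the paper embeds Dirichlet-process errors inside the full EASI demand system, whereas the sketch below merely clusters hypothetical demand residuals with a truncated Dirichlet-process mixture to count occupied clusters. The simulated data are invented to mimic the "one dominant cluster" finding.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical budget-share residuals: one dominant cluster plus a small one.
rng = np.random.default_rng(0)
resid = np.vstack([rng.normal(0.0, 0.05, (950, 3)),
                   rng.normal(0.25, 0.05, (50, 3))])

dp = BayesianGaussianMixture(
    n_components=10,                                   # truncation level of the DP
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(resid)

labels = dp.predict(resid)
used = np.unique(labels)
print("clusters used:", len(used))
print("cluster shares:", np.round(np.bincount(labels)[used] / len(labels), 3))
```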
This paper considers $k$-means clustering in the presence of noise. It is known that $k$-means clustering is highly sensitive to noise, and thus noise should be removed to obtain a quality solution. A popular formulation of this problem is called $k$-means clustering with outliers: the goal is to discard up to a specified number $z$ of points as noise/outliers and then find a $k$-means solution on the remaining data. The problem has received significant attention, yet current algorithms with theoretical guarantees suffer from either high running time or an inherent loss in solution quality. The main contribution of this paper is two-fold. Firstly, we develop a simple greedy algorithm with provably strong worst-case guarantees. The greedy algorithm adds a simple preprocessing step to remove noise, which can be combined with any $k$-means clustering algorithm, and gives the first pseudo-approximation-preserving reduction from $k$-means with outliers to $k$-means without outliers. Secondly, we show how to construct a coreset of size $O(k \log n)$. Combined with our greedy algorithm, this yields a scalable, near-linear-time algorithm. The theoretical contributions are verified experimentally by demonstrating that the algorithm quickly removes noise and obtains a high-quality clustering.
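The flavour of such a preprocessing step can be sketched as follows (an illustrative variant, not the paper's exact greedy routine or its coreset construction): cluster once, discard the $z$ points farthest from their nearest centre as noise, then re-run any k-means solver on the remainder. The function name is hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_minus_outliers(X, k, z, seed=0):
    # Preliminary fit, then drop the z points farthest from their nearest centre.
    pre = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    dists = pre.transform(X).min(axis=1)             # distance to nearest centre
    keep = np.argsort(dists)[: len(X) - z]
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[keep]), keep

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 0.5, (100, 2)),
               rng.uniform(-50, 50, (10, 2))])       # 10 gross outliers
model, kept = kmeans_minus_outliers(X, k=2, z=10)
print("inertia after noise removal:", round(model.inertia_, 2))
```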
In this paper, we show that the popular K-means clustering problem can equivalently be reformulated as a conic program of polynomial size. The arising convex optimization problem is NP-hard, but amenable to a tractable semidefinite programming (SDP) relaxation that is tighter than the current SDP relaxation schemes in the literature. In contrast to the existing schemes, our proposed SDP formulation gives rise to solutions that can be leveraged to identify the clusters. We devise a new approximation algorithm for K-means clustering that utilizes the improved formulation and empirically illustrate its superiority over the state-of-the-art solution schemes.
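For context, the widely used relaxation that this paper tightens looks roughly like the following Peng-Wei-style sketch (not the paper's conic reformulation), where the lifted matrix $Z$ plays the role of a normalized cluster-membership Gram matrix. Solving it requires the cvxpy package.

```python
import numpy as np
import cvxpy as cp

def kmeans_sdp(X, k):
    # Standard relaxation: minimise tr(D Z)/2 over elementwise-nonnegative PSD
    # matrices Z with unit row sums and trace k -- a set containing every
    # Z = sum_c (1/|c|) 1_c 1_c^T induced by a genuine k-partition.
    n = len(X)
    D = np.square(np.linalg.norm(X[:, None] - X[None], axis=2))
    Z = cp.Variable((n, n), PSD=True)
    cons = [Z >= 0, cp.sum(Z, axis=1) == 1, cp.trace(Z) == k]
    cp.Problem(cp.Minimize(0.5 * cp.trace(D @ Z)), cons).solve()
    return Z.value

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (15, 2)), rng.normal(4, 0.3, (15, 2))])
Z = kmeans_sdp(X, k=2)
# For well-separated data Z is near block-diagonal, so clusters can be read off.
print((Z[:3, :5] > 1e-2).astype(int))
```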
We introduce a new $(\epsilon_p, \delta_p)$-differentially private algorithm for the $k$-means clustering problem. Given a dataset in Euclidean space, the $k$-means clustering problem requires one to find $k$ points in that space such that the sum of squared Euclidean distances between each data point and its closest point among the $k$ returned is minimized. Although privacy-preserving methods with good theoretical guarantees exist for this problem [Balcan et al., 2017; Kaplan and Stemmer, 2018], in practice it is the additive error that dictates their performance. By reducing the problem to a sequence of instances of maximum coverage on a grid, we derive a new method that achieves lower additive error than previous works. For input datasets with cardinality $n$ and diameter $\Delta$, our algorithm has an $O(\Delta^2 (k \log^2 n \log(1/\delta_p)/\epsilon_p + k\sqrt{d \log(1/\delta_p)}/\epsilon_p))$ additive error while maintaining constant multiplicative error. We conclude with experiments and find an improvement over previously implemented work for this problem.
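The grid-based maximum-coverage reduction is involved, but the kind of baseline it improves on, Lloyd iterations with Laplace noise (DPLloyd-style), conveys where additive error enters. A minimal sketch, assuming data pre-scaled to $[-1, 1]^d$ and a hypothetical function name; it is not the paper's method.

```python
import numpy as np

def dp_kmeans(X, k, epsilon, iters=5, seed=0):
    # DPLloyd-style baseline: each iteration releases Laplace-noised cluster
    # counts and coordinate sums, splitting the privacy budget across all
    # released quantities.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    eps_q = epsilon / (iters * (d + 1))        # budget per quantity per iteration
    centers = X[rng.choice(n, k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            pts = X[labels == j]
            count = max(len(pts) + rng.laplace(0, 1 / eps_q), 1.0)   # count has sensitivity 1
            sums = pts.sum(axis=0) + rng.laplace(0, 1 / eps_q, d)    # per-dim sum sensitivity <= 1
            centers[j] = np.clip(sums / count, -1, 1)
    return centers

rng = np.random.default_rng(2)
X = np.clip(np.vstack([rng.normal(-0.5, 0.1, (200, 2)),
                       rng.normal(0.5, 0.1, (200, 2))]), -1, 1)
print(dp_kmeans(X, k=2, epsilon=1.0))
```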