Data clustering is a fundamental unsupervised machine learning problem, and for decades the most widely used clustering method has been k-means. Recently, KindAP, a newly proposed algorithm based on subspace matching and a semi-convex relaxation scheme, has been shown to outperform k-means in several respects: it requires no random restarts and is insensitive to initialization. Empirical evidence suggests that, unlike k-means, KindAP can correctly identify well-separated globular clusters even when the number of clusters is large, but a rigorous theoretical analysis has been lacking. This study improves the algorithm design and establishes a first-step theory for KindAP. KindAP is in fact a two-layered alternating projection procedure applied to two non-convex sets. The inner loop solves an intermediate model via a semi-convex relaxation scheme that relaxes the more complicated of the two non-convex sets while keeping the other intact. We first derive a convergence result for this inner loop. Then, under the ``ideal data'' assumption that the $n$ data points are located at exactly $k$ positions, we prove that KindAP converges globally to the global minimum with the help of the outer loop. Further work is ongoing to extend this analysis from the ideal data case to more general settings.
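To make the alternating-projection idea concrete, the following is a minimal illustrative sketch, not the actual KindAP algorithm: it alternates projections between two toy sets in $\mathbb{R}^2$, a unit circle (non-convex) and a line, using hypothetical closed-form projection operators `proj_circle` and `proj_line` chosen purely for illustration.

```python
import numpy as np

def proj_circle(x):
    # Nearest point on the unit circle {x : ||x|| = 1} (a non-convex set).
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.array([1.0, 0.0])

def proj_line(x):
    # Nearest point on the horizontal line {(t, 0.5) : t real}.
    return np.array([x[0], 0.5])

def alternating_projections(x0, iters=100):
    # Alternate the two projections; with one non-convex set, convergence
    # is only local in general, which is why KindAP needs extra machinery.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = proj_line(proj_circle(x))
    return x

# The iterates approach an intersection point of the two sets,
# (sqrt(3)/2, 0.5) when started from this initial point.
x_star = alternating_projections([2.0, 2.0])
```

KindAP layers two such loops and relaxes only the harder non-convex set in the inner loop, which is what the convergence analysis above addresses.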