mfleming99 / XMeans


XMeans

Spark-Scala Implementation of XMeans

This is a clustering library that estimates the number of centroids from the data, instead of requiring a fixed number of clusters up front like many classical clustering algorithms.

This is my attempt at implementing Dan Pelleg and Andrew Moore's X-means paper. This implementation does not use the k-d tree discussed in the paper, and it stores the data points in a Spark RDD.
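At the heart of X-means is a Bayesian Information Criterion (BIC) score that decides whether splitting a centroid models the data better. The following is a minimal, Spark-free sketch of that score under the paper's identical-spherical-Gaussians assumption; the names (`BicSketch`, `bic`) and the cluster representation are illustrative and not part of this library's API:

```scala
// Sketch of the BIC score from Pelleg & Moore's X-means paper.
// Each cluster is (its points, its centroid); all names here are
// illustrative, not this library's API.
object BicSketch {
  def bic(clusters: Seq[(Array[Array[Double]], Array[Double])]): Double = {
    val r = clusters.map(_._1.length).sum   // total number of points R
    val m = clusters.head._2.length         // dimensionality M
    val k = clusters.length                 // number of centroids K

    def sqDist(a: Array[Double], b: Array[Double]): Double =
      a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

    // pooled maximum-likelihood variance estimate (spherical Gaussians)
    val variance =
      clusters.map { case (pts, c) => pts.map(sqDist(_, c)).sum }.sum / (r - k)

    // log-likelihood of the data under the K-centroid model
    val logLik = clusters.map { case (pts, _) =>
      val rn = pts.length.toDouble
      rn * math.log(rn) - rn * math.log(r.toDouble) -
        rn * m / 2.0 * math.log(2 * math.Pi * variance) -
        (rn - k) / 2.0
    }.sum

    // penalize by the number of free parameters: K-1 cluster priors,
    // M*K centroid coordinates, and 1 shared variance
    val freeParams = k * (m + 1)
    logLik - freeParams / 2.0 * math.log(r.toDouble)
  }
}
```

On data with two well-separated groups, the two-centroid model should score higher than the one-centroid model, which is exactly the comparison X-means uses when deciding whether to keep a split.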

Install

This package uses Scala 2.12 and Spark 2.4.5. To add this package to your sbt project, add the following two lines to your build.sbt file.

externalResolvers += "XMeans package" at "https://maven.pkg.github.com/mfleming99/XMeans"
libraryDependencies += "org.mf" %% "XMeans" % "1.2"
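Note that GitHub Packages generally requires authentication even for reading public packages, so you may also need a credentials entry in build.sbt. The following is a sketch only; the environment variable name is a placeholder, and the token must be a GitHub personal access token with the read:packages scope:

```scala
// build.sbt — credentials for the GitHub Packages resolver (placeholder values)
credentials += Credentials(
  "GitHub Package Registry",              // realm
  "maven.pkg.github.com",                 // host
  "<your-github-username>",               // user
  sys.env.getOrElse("GITHUB_TOKEN", "")   // token with read:packages scope
)
```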

Use

The class functions similarly to Apache Spark's KMeans class, except that there is no need to specify the number of clusters; instead, you specify the maximum number of centroids you are willing to compute. (Note: the number of centroids found is nearly always lower than kMax.) An example of its use would be as follows.

val centroids = new XMeans().setKMax(12).run(dataset)

Now centroids will contain all the centroids that XMeans computed.

About

License: MIT License


Languages

Language: Scala 100.0%