- Building clustering applications with Apache Spark MLlib (machine learning) and Scala.
- Scala: 2.12.12
- Apache Spark: 3.0.0
- Maven Scala Plugin: 2.15.2
- OpenJDK: Java 1.8
- SBT 1.2.1
- winutils: Windows binaries for Hadoop (needed to run Spark/Hadoop locally on Windows)
- Apache Hive Configuration Properties
- IntelliJ IDEA Ultimate
- Key APIs: SparkSession, the `org.apache.spark.ml.clustering` package (BisectingKMeans, GaussianMixture, KMeans, LDA), ClusteringEvaluator, transform, ...
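
A minimal sketch of how the APIs above fit together, using KMeans as the example algorithm (the other estimators listed, such as BisectingKMeans or GaussianMixture, follow the same fit/transform pattern). The dataset and `k` value here are illustrative assumptions, not from the project itself:

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.evaluation.ClusteringEvaluator
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object KMeansExample {
  def main(args: Array[String]): Unit = {
    // Local SparkSession for experimentation; a real deployment
    // would configure master/executors differently.
    val spark = SparkSession.builder()
      .appName("KMeansExample")
      .master("local[*]")
      .getOrCreate()

    import spark.implicits._

    // Tiny in-memory dataset with a "features" vector column,
    // the default input column name for Spark ML estimators.
    val data = Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
      Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)
    ).map(Tuple1.apply).toDF("features")

    // Fit a KMeans model; k = 2 is an assumption for this toy data.
    val kmeans = new KMeans().setK(2).setSeed(1L)
    val model = kmeans.fit(data)

    // transform() appends a "prediction" column of cluster indices.
    val predictions = model.transform(data)

    // ClusteringEvaluator computes the silhouette score
    // (values closer to 1.0 indicate better-separated clusters).
    val silhouette = new ClusteringEvaluator().evaluate(predictions)
    println(s"Silhouette = $silhouette")

    spark.stop()
  }
}
```

Swapping `KMeans` for `BisectingKMeans`, `GaussianMixture`, or `LDA` keeps the same fit/transform flow; only the estimator-specific parameters change.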