- π Model accuracy assessment, overfitting, regularization (XGBClassifier, roc_curve, precision_recall_curve)
- π Shelter Animal Outcomes (Kaggle competition)
- π Polynomial Regression and Proximity Metrics (Levenshtein distance)
- π Polynomial Regression and Proximity Metrics (kNN digits)
- π Clustering Algorithms (KMeans, DBSCAN, adjusted_rand_score, silhouette_score)
- π Wine quality assessment (Analysis, LinearRegression, RandomForestRegressor)
- π Iris Dataset (RandomForest, LogisticRegression)
- π Adult Dataset (SVM, LogisticRegression)
- π KNeighborsClassifier
- π Titanic Dataset
- π Titanic Dataset (DecisionTreeClassifier, GridSearchCV)
- π Clustering (in-class)
- π Ensemble (models)
- π Ensemble (StackingCVRegressor, GradientBoostingRegressor, BaggingRegressor, AdaBoostRegressor)
- π Moons circles (KMeans, AgglomerativeClustering, DBSCAN, AffinityPropagation)
- π Loss Functions and Optimization (theory)
- π Athletes classifier (model accuracy assessment, overfitting, regularization)
- π Loss Functions and Optimization (gradient descent, nesterov momentum, rmsprop)
- π Improving model quality: advanced classification algorithms (test)
- π Improving model quality: advanced algorithms (GridSearchCV, best_params_, best_score_, best_estimator_)
- π Collaborative filtering (part 1, package Surprise)
- π Machine Learning Laboratory work on modeling
- π Introduction and classification of recommendation systems
- π Content Based Recommendations (TfidfTransformer, CountVectorizer, TfidfVectorizer)
- π Content Based Recommendations (preprocessing, StackingCVRegressor)
- π Collaborative filtering (part 2, package Surprise, KNNBasic, KNNWithMeans, SVD, SVDpp, accuracy)
- π Recommendations based on latent factors (implicit, sparse)
- π Hybrid Recommender Systems
- π Hybrid Recommender Systems (surprise, NearestNeighbors)
- π Hybrid Recommender Systems (lightfm)
- π Introducing Time Series (candlestick_ohlc, mpl_finance, visualization)
- π TS Practice (adfuller, boxcox)
- π Basic Full Analysis Example (adfuller, sm, smt, boxcox)
- π Elementary time series analysis methods (MA, WMA, EMA, DEMA, TEMA)
- π ARIMA and GARCH models, predicting values based on them (ARMA, ARIMA, ARCH, GARCH)
- π Markov random processes (building Markov models for time series, forecasting values)
- π Time series debugging and anomaly detection
- π Bitcoin Historical USD Price (TS, NN, Keras)
- π Time series DJIA 30 Stock (RNN, LSTM, GRU)
- π Exploratory data analysis of banking transactions
- π Analysis and Hypotheses to understand why employees leave the company
- π Using pandas and numpy to clean data
- π Feature Selection (LASSO(L1), Trees)
- π Backward selection
- π Data Quality Problem
- π train_test_split, score (RSS, RSE, R^2)
- π Preprocessing
- π Class imbalance (under-sampling, over-sampling)
- π Exploratory data analysis
- π Data analysis techniques (quantile, z-score, IQR)
- π Support module (SelectKBest, ExtraTreesClassifier, mutual_info_classif)
- π Data preprocessing lab (Score_metrics)
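As a quick taste of the hyperparameter-search workflow listed above (GridSearchCV with `best_params_`, `best_score_`, `best_estimator_`), here is a minimal sketch; the dataset (Iris) and parameter grid are illustrative, not taken from the notebooks:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Illustrative grid; real notebooks would tune more parameters.
param_grid = {"max_depth": [2, 3, 4], "min_samples_leaf": [1, 5]}

search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)           # best hyperparameter combination found
print(round(search.best_score_, 3))  # mean cross-validated accuracy of that combination
clf = search.best_estimator_         # refit on the full dataset (refit=True by default)
```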
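The IQR-based outlier detection mentioned under data analysis techniques can be sketched as follows; the synthetic data and the 1.5×IQR fence are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 standard-normal points plus two injected outliers.
data = np.concatenate([rng.normal(0.0, 1.0, 100), [8.0, -9.0]])

# Tukey's fences: points beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR are flagged.
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower) | (data > upper)]
print(outliers)
```

The same fences are what a box plot draws as whiskers; z-score filtering (e.g. |z| > 3) is the parametric alternative.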
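For the elementary time series methods (MA, WMA, EMA, DEMA, TEMA), a minimal EMA sketch shows the core recurrence the fancier variants build on; the price series and smoothing factor are made up for illustration:

```python
import numpy as np

def ema(series, alpha):
    """Exponential moving average: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    out = np.empty(len(series), dtype=float)
    out[0] = series[0]  # seed with the first observation
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1 - alpha) * out[t - 1]
    return out

prices = np.array([10.0, 11.0, 12.0, 11.0, 13.0])
print(ema(prices, alpha=0.5))
```

DEMA and TEMA are then combinations of repeated EMAs (e.g. DEMA = 2·EMA − EMA(EMA)) that reduce the lag of plain smoothing.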