---
title: "Cluster Validation"
author:
  - Craciun Dorina (group 405)
  - Tapirdea Alexandru (group 405)
date: "`r format(Sys.time(), '%d %B %Y')`"
# output:
#   ioslides_presentation:
#     build: yes
#     incremental: yes
#     smaller: yes
#   slidy_presentation:
#     incremental: yes
output:
  xaringan::moon_reader
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, warning = FALSE)
library(xaringan)
```
# Cluster validation

Cluster validation measures the quality of the results obtained after applying a clustering algorithm.

A complete clustering analysis involves the following steps:

<ol>
<li> assessing clustering tendency </li>
<li> determining the optimal number of clusters </li>
<li> the clustering itself </li>
<li> validating the clustering </li>
</ol>

???

Before applying any clustering algorithm to a data set, the first thing to do is to assess the clustering tendency, i.e. whether clustering is suitable for the data at all. If it is, the next question is how many clusters there are. Next, you can perform hierarchical clustering or partitioning clustering (with a pre-specified number of clusters). Finally, you can use a number of measures, described in this part, to evaluate the goodness of the clustering results.
---
## Assessing clustering tendency

Before applying any clustering method, it is important to decide whether the data set contains meaningful clusters (i.e. non-random structure). If it does, we also determine the number of clusters.

```{r echo=FALSE}
# install.packages(c("factoextra", "clustertend", "ggplot2"))
```

```{r echo=TRUE}
head(iris, 3)
```
---
```{r echo=TRUE}
# Iris data set, excluding the column Species
df <- iris[, -5]
# Random data generated from the iris data set
random_df <- apply(df, 2,
                   function(x){runif(length(x), min(x), max(x))})
random_df <- as.data.frame(random_df)
```

```{r echo=FALSE, warning=FALSE, message=FALSE}
# Standardize the data sets
df <- iris.scaled <- scale(df)
random_df <- scale(random_df)
library("factoextra")
library("ggplot2")
set.seed(123)
# Plot the iris data set
fviz_pca_ind(prcomp(df), title = "PCA - Iris data",
             habillage = iris$Species, palette = "jco",
             geom = "point", ggtheme = theme_classic(),
             legend = "bottom")
```

???

One problem is that clustering algorithms return clusters even when none exist or when they are poorly defined. For this reason we must evaluate the data set before applying an algorithm. To illustrate, we use the iris data set, shown above. We also generated a random data set from the original one, and we will apply the various algorithms to both sets to observe the differences.
---
```{r echo=TRUE}
# Plot the random df
fviz_pca_ind(prcomp(random_df), title = "PCA - Random data",
             geom = "point", ggtheme = theme_classic())
```
- ---
- ##### Aplicam alg. de clusterizare ierarhica si k-means pe fiecare din cele 2 seturi de date
- ```{r echo=TRUE, out.height="400px"}
- # K-means on iris dataset
- km.res1 <- kmeans(df, 3)
- fviz_cluster(list(data = df, cluster = km.res1$cluster),
- ellipse.type = "norm", geom = "point", stand = FALSE,
- palette = "jco", ggtheme = theme_classic())
- ```
---
```{r echo=TRUE}
# Hierarchical clustering on the iris data set
fviz_dend(hclust(dist(df)), k = 3, k_colors = "jco",
          as.ggplot = TRUE, show_labels = FALSE)
```
---
```{r echo=TRUE}
# K-means on the random data set
km.res2 <- kmeans(random_df, 3)
fviz_cluster(list(data = random_df, cluster = km.res2$cluster),
             ellipse.type = "norm", geom = "point", stand = FALSE,
             palette = "jco", ggtheme = theme_classic())
```
---
```{r echo=TRUE}
# Hierarchical clustering on the random data set
fviz_dend(hclust(dist(random_df)), k = 3, k_colors = "jco",
          as.ggplot = TRUE, show_labels = FALSE)
```

???

Note that the algorithms attempt a clustering even on the uniformly distributed data, where there are obviously no meaningful clusters. This is why we need methods for assessing clustering tendency.
---
### Methods for assessing clustering tendency

##### 1. Statistical methods (Hopkins statistic)
##### 2. Visual methods (Visual Assessment of cluster Tendency - the VAT algorithm)

<hr/>

##### 1. Statistical methods (Hopkins statistic)
- measure the probability that a data set was generated from a uniform distribution

Let D be a real data set.
1. sample uniformly n points ( $p_1$ ,..., $p_n$ ) from D
2. for each point $p_i$ , find its nearest neighbour $p_j$ and compute the distance between them: $x_i$ = dist( $p_i$ , $p_j$ )
3. generate a random data set ( $q_1$ ,..., $q_n$ ) from a uniform distribution with the same variation as D
4. for each point $q_i$ , find its nearest neighbour $q_j$ in D and compute the distance between them: $y_i$ = dist( $q_i$ , $q_j$ )
5. compute the Hopkins statistic as the sum of the distances for the randomly generated set divided by the sum of the distances for both sets:

$$H = \frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i}$$
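The five steps above can be sketched in a short, self-contained snippet (Python is used purely for illustration; this is not the `clustertend` implementation, and references differ on which of the two complementary ratios they report — in either convention, a uniformly distributed data set yields a value near 0.5 and a clusterable one yields a value far from 0.5):

```python
import random

def hopkins(data, m, seed=42):
    """Hopkins statistic sketch: data is a list of d-dimensional tuples.

    Returns sum(y_i) / (sum(x_i) + sum(y_i)) as in the slide formula.
    Values near 0.5 indicate uniformly distributed data; values far
    from 0.5 indicate clustering structure.
    """
    rng = random.Random(seed)
    dims = list(zip(*data))
    lo, hi = [min(c) for c in dims], [max(c) for c in dims]

    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    # x_i: distance of each sampled real point to its nearest neighbour in D
    sample = rng.sample(range(len(data)), m)
    x = [min(dist(data[i], data[j]) for j in range(len(data)) if j != i)
         for i in sample]
    # y_i: distance of each uniformly generated point to its nearest point in D
    q = [tuple(rng.uniform(l, h) for l, h in zip(lo, hi)) for _ in range(m)]
    y = [min(dist(p, r) for r in data) for p in q]
    return sum(y) / (sum(x) + sum(y))
```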
---
##### 1. Statistical methods (Hopkins statistic)

- A value of H close to 0.5 means that the data set D is uniformly distributed.
- H0 (null hypothesis): D is uniformly distributed (no meaningful clusters)
- H1 (alternative hypothesis): D is not uniformly distributed (contains meaningful clusters)
- H value ~ 0.5 => H0
- H value ~ 0 => H1

```{r echo=FALSE}
library(clustertend)
set.seed(123)
```

```{r echo=TRUE}
# Compute the Hopkins statistic for the iris data set
hopkins(df, n = nrow(df)-1)
```

```{r echo=FALSE}
set.seed(123)
```

```{r echo=TRUE}
# Compute the Hopkins statistic for a random data set
hopkins(random_df, n = nrow(random_df)-1)
```
---
##### 2. Visual methods

- The VAT algorithm:
  1. compute the dissimilarity matrix (DM) using Euclidean distance
  2. reorder the DM so that similar objects appear close to one another => ODM
  3. display the ODM as an ordered dissimilarity image (ODI)

```{r echo=TRUE, out.height="350px"}
fviz_dist(dist(df), show_labels = FALSE)+labs(title = "Iris data")
```
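The reordering in step 2 is the heart of VAT. A minimal sketch of the ordering (a Prim-like traversal that repeatedly appends the object closest to the ones already picked; Python used purely for illustration, independent of the R code on these slides):

```python
def vat_order(dm):
    """VAT reordering sketch for a symmetric dissimilarity matrix dm
    (list of lists): start from an object on the largest dissimilarity,
    then repeatedly append the remaining object closest to the picked set."""
    n = len(dm)
    start = max(range(n), key=lambda i: max(dm[i]))
    order = [start]
    remaining = set(range(n)) - {start}
    while remaining:
        nxt = min(remaining, key=lambda j: min(dm[i][j] for i in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

def reorder(dm, order):
    """Apply the ordering to obtain the ODM."""
    return [[dm[i][j] for j in order] for i in order]
```

On clusterable data the ordering makes same-cluster objects contiguous, so the ODI shows dark blocks along the main diagonal.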
---
```{r echo=TRUE, out.height="350px"}
fviz_dist(dist(random_df), show_labels = FALSE)+
  labs(title = "Random data")
```

- The colour level is proportional to the dissimilarity value: pure red corresponds to dist( $x_i$ , $x_j$ ) = 0 and pure blue to the largest dissimilarity.

???

Red: high similarity, blue: low similarity.
The DM confirms that the iris data set has a clustered structure.
The VAT algorithm assesses clustering tendency visually, by counting the dark blocks along the main diagonal.
---
### Methods for determining the optimal number of clusters

#### 1. Direct methods:
- optimize a criterion (within-cluster sums of squares, the average silhouette)

##### Elbow method
##### Average silhouette method

#### 2. Statistical testing methods:
- compare the data against a null hypothesis

##### Gap statistic method

#### In R we have the following functions:
- fviz_nbclust()
- NbClust()

???

1. Direct methods

Elbow method
- WSS = within-cluster sum of squares
- the total WSS measures the compactness of the clustering; it should be minimized.
- The elbow method treats the total WSS as a function of the number of clusters:
- once a suitable number of clusters is reached, adding another one barely improves the total WSS
  1. run the clustering algorithm for different numbers of clusters ( $k_i$ )
  2. for each $k_i$ , compute the total WSS
  3. plot the total WSS against the number of clusters $k_i$
  4. the location of the bend (the "elbow") in the curve is considered a good number of clusters.
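The four steps can be sketched with a tiny hand-rolled Lloyd's k-means (a 1-D toy in Python, illustrative only and independent of the R functions shown on these slides):

```python
import random

def kmeans_1d(xs, k, iters=30, seed=1):
    """Toy 1-D Lloyd's k-means; returns (centers, total within-cluster SS)."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        # keep the old center when a group empties out
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    wss = sum(min((x - c) ** 2 for c in centers) for x in xs)
    return centers, wss

def elbow_curve(xs, kmax=6):
    """Steps 1-3: total WSS as a function of the number of clusters."""
    return {k: kmeans_1d(xs, k)[1] for k in range(1, kmax + 1)}
```

For data with two well-separated groups, the curve drops sharply from k = 1 to k = 2 and then flattens: the elbow is at k = 2.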
Average silhouette method
- This method measures the quality of a clustering.
- It determines how well each object fits the cluster it belongs to.
- The optimal number of clusters is the one that maximizes the average silhouette over the candidate values of k.
  1. run the clustering algorithm for different numbers of clusters ( $k_i$ )
  2. for each $k_i$ , compute the average silhouette of the observations
  3. plot the average silhouette against the number of clusters $k_i$
  4. the value of $k_i$ that maximizes the average silhouette is considered the optimal number of clusters.
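For step 2: the silhouette width of one observation is s(i) = (b(i) - a(i)) / max(a(i), b(i)), where a(i) is its mean distance to the other members of its own cluster and b(i) its mean distance to the nearest other cluster. A minimal 1-D Python sketch (illustrative only, not the R implementation):

```python
def silhouette_widths(points, labels):
    """Silhouette width of each 1-D observation given its cluster label.
    Assumes no singleton clusters."""
    widths = []
    clusters = set(labels)
    for i, (p, li) in enumerate(zip(points, labels)):
        # a(i): mean distance to the other members of i's own cluster
        own = [abs(p - q) for j, (q, lj) in enumerate(zip(points, labels))
               if lj == li and j != i]
        a = sum(own) / len(own)
        # b(i): mean distance to the closest of the other clusters
        b = min(sum(abs(p - q) for q, lj in zip(points, labels) if lj == lc)
                / labels.count(lc)
                for lc in clusters if lc != li)
        widths.append((b - a) / max(a, b))
    return widths
```

Widths near 1 mean the observation sits well inside its cluster; negative widths flag observations probably assigned to the wrong cluster.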
2. Statistical testing methods

Gap statistic method
- Compares the total WSS for different numbers of clusters against its expectation under a null reference distribution, and returns the number of clusters that maximizes the gap statistic
  1. cluster the data with different numbers of clusters k = 1, ..., kmax and compute the total WSS ( $W_k$ ) for each
  2. generate B reference data sets from a random uniform distribution; cluster each of them as well and compute $W_{kb}$
  3. compute the gap statistic:

  $$\text{Gap}(k) = \frac{1}{B}\sum_{b=1}^{B}\log(W_{kb}) - \log(W_k)$$

  Also compute the standard deviation $s_k$ of the statistics.
  4. choose the smallest number of clusters k such that:

  $$\text{Gap}(k) \ge \text{Gap}(k+1) - s_{k+1}$$
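The computation can be sketched as follows (a 1-D Python toy, illustrative only; the clustering here is a simple "cut at the largest gaps" heuristic rather than k-means, which is enough to show how the statistic behaves):

```python
import math, random

def split_wss(xs, k):
    """Cluster sorted 1-D data into k contiguous groups by cutting at the
    k-1 largest gaps; return the total within-cluster sum of squares."""
    s = sorted(xs)
    gaps = (sorted(range(1, len(s)), key=lambda i: s[i] - s[i - 1])[-(k - 1):]
            if k > 1 else [])
    groups, prev = [], 0
    for c in sorted(gaps) + [len(s)]:
        groups.append(s[prev:c])
        prev = c
    return sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

def gap_statistic(xs, kmax=4, B=20, seed=3):
    """Gap(k) = mean_b log(W_kb) - log(W_k) over B uniform reference sets."""
    rng = random.Random(seed)
    lo, hi = min(xs), max(xs)
    gaps, sks = {}, {}
    for k in range(1, kmax + 1):
        ref = [math.log(split_wss([rng.uniform(lo, hi) for _ in xs], k))
               for _ in range(B)]
        m = sum(ref) / B
        gaps[k] = m - math.log(split_wss(xs, k))
        sks[k] = (sum((r - m) ** 2 for r in ref) / B) ** 0.5 * (1 + 1 / B) ** 0.5
    return gaps, sks

def choose_k(gaps, sks, kmax=4):
    """Smallest k with Gap(k) >= Gap(k+1) - s(k+1)."""
    for k in range(1, kmax):
        if gaps[k] >= gaps[k + 1] - sks[k + 1]:
            return k
    return kmax
```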
1. fviz_nbclust() (factoextra)
It can be used to compute the three methods above [elbow, silhouette and gap statistic] for any partitioning clustering method.

2. NbClust() (NbClust)
It provides 30 indices for determining the relevant number of clusters and proposes to users the best clustering scheme from the different results obtained by varying all combinations of number of clusters, distance measures, and clustering methods.
It can simultaneously compute all the indices and determine the number of clusters in a single function call.
---
### Example: USArrests data

```{r echo=FALSE}
# install.packages("NbClust")
library(factoextra)
library(NbClust)
# Standardize the data
df <- scale(USArrests)
set.seed(123)
```

```{r echo=TRUE}
head(df)
```
---
```{r echo=TRUE}
# Elbow method
fviz_nbclust(df, kmeans, method = "wss") +
  geom_vline(xintercept = 4, linetype = 2)+
  labs(subtitle = "Elbow method")
```
---
```{r echo=TRUE}
# Silhouette method
fviz_nbclust(df, kmeans, method = "silhouette")+
  labs(subtitle = "Silhouette method")
```
---
```{r echo=TRUE, out.height="450px"}
# Gap statistic
# nboot = 50 to keep the function speedy; recommended value: nboot = 500
# verbose = FALSE to hide computing progression.
fviz_nbclust(df, kmeans, nstart = 25, method = "gap_stat",
             nboot = 50)+labs(subtitle = "Gap statistic method")
```
???

NbClust(data = NULL, diss = NULL, distance = "euclidean",
        min.nc = 2, max.nc = 15, method = NULL)

- data: matrix
- diss: dissimilarity matrix to be used. By default diss = NULL, but if it is replaced by a dissimilarity matrix, distance should be NULL
- distance: the distance measure to be used to compute the dissimilarity matrix (euclidean, manhattan or NULL)
- min.nc, max.nc: minimal and maximal number of clusters, respectively
- method: the cluster analysis method to be used, including ward.D, ward.D2, single, complete, average, kmeans and more.

To compute NbClust() for k-means, use method = "kmeans".
To compute NbClust() for hierarchical clustering, method should be one of c("ward.D", "ward.D2", "single", "complete", "average").
- ---
- ```{r echo=FALSE, include=FALSE}
- library("NbClust")
- library("factoextra")
- nb <- NbClust(df, distance = "euclidean", min.nc = 2,
- max.nc = 10, method = "kmeans")
- ```
- ```{r echo=TRUE, out.height="200px"}
- fviz_nbclust(nb)
- ```
---
### Cluster Validation Statistics

#### 1. Internal cluster validation
- uses only internal information from the clustering process
- estimates the number of clusters and the most suitable algorithm
- the silhouette coefficient
- the Dunn index
---
#### 2. External cluster validation
- compares the results of a cluster analysis with an externally known result
- used to select the best clustering algorithm for a given data set
- Rand index
- Meila's variation of information (VI) index

#### 3. Relative cluster validation
- evaluates the clustering by varying the parameters of the same algorithm
- used to determine the optimal number of clusters

???

The silhouette coefficient
- measures how well a data set is clustered, estimating the average distance between clusters
- is based, for each point, on its distance to the other points of its cluster and to the points of the neighbouring clusters

The Dunn index is maximal when the data have been clustered optimally.
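The Dunn index mentioned in the note is the ratio between the smallest inter-cluster distance and the largest intra-cluster diameter; a compact 1-D Python sketch (illustrative only, not the fpc implementation):

```python
def dunn_index(points, labels):
    """Dunn index sketch: min inter-cluster distance / max cluster diameter."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    groups = list(clusters.values())
    # largest diameter of any single cluster
    max_diam = max(max(abs(a - b) for a in g for b in g) for g in groups)
    # smallest distance between points of two different clusters
    min_sep = min(abs(a - b)
                  for i, g in enumerate(groups)
                  for h in groups[i + 1:]
                  for a in g for b in h)
    return min_sep / max_diam
```

Compact, well-separated clusters yield a large value; moving one point to the wrong cluster collapses the index.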
---
```{r echo=F}
library(factoextra)
library(fpc)
library(NbClust)
### Data preparation
# Exclude the column "Species" at position 5
df <- iris[, -5]
# Standardize
df <- scale(df)
```

##### For illustration we will use the iris data set

```{r echo=T, eval=T, fig.show='hold', out.width="50%"}
# K-means clustering & visualize
km.res <- eclust(df, "kmeans", k = 3, nstart = 25, graph = FALSE)
fviz_cluster(km.res, geom = "point", ellipse.type = "norm",
             palette = "jco", ggtheme = theme_minimal())
# Hierarchical clustering & visualize
hc.res <- eclust(df, "hclust", k = 3, hc_metric = "euclidean",
                 hc_method = "ward.D2", graph = FALSE)
fviz_dend(hc.res, show_labels = FALSE, as.ggplot = TRUE, palette = "jco")
```
---
```{r echo=T}
# Silhouette plot
# fviz_silhouette(km.res, palette = "jco", ggtheme = theme_classic())
# Silhouette information
silinfo <- km.res$silinfo
names(silinfo)
# Silhouette widths of each observation
head(silinfo$widths[, 1:3], 10)
```
---
```{r echo=T}
# Average silhouette width of each cluster
silinfo$clus.avg.widths
# The total average (mean of all individual silhouette widths)
silinfo$avg.width
# The size of each cluster
km.res$size
# Silhouette widths of the observations
sil <- km.res$silinfo$widths[, 1:3]
# Objects with negative silhouette width
neg_sil_index <- which(sil[, 'sil_width'] < 0)
sil[neg_sil_index, , drop = FALSE]
```
???

cluster.stats(d = NULL, clustering, alt.clustering = NULL)

- d: a distance object between cases, as generated by the dist() function
- clustering: vector containing the cluster number of each observation
- alt.clustering: vector such as for clustering, indicating an alternative clustering

The function cluster.stats() returns a list containing many components useful for analyzing the intrinsic characteristics of a clustering:

- cluster.number: number of clusters
- cluster.size: vector containing the number of points in each cluster
- average.distance, median.distance: vector containing the cluster-wise within-cluster average/median distances
- average.between: average distance between clusters. We want it to be as large as possible
- average.within: average distance within clusters. We want it to be as small as possible
- clus.avg.silwidths: vector of cluster average silhouette widths. Recall that the silhouette width is also an estimate of the average distance between clusters. Its value lies between -1 and 1, with a value of 1 indicating a very good cluster.
- within.cluster.ss: a generalization of the within-clusters sum of squares (the k-means objective function), which is obtained if d is a Euclidean distance matrix
- dunn, dunn2: Dunn index
- corrected.rand, vi: two indices to assess the similarity of two clusterings: the corrected Rand index and Meila's VI
---
### Dunn Index

```{r echo=T}
# Statistics for the k-means clustering
km_stats <- cluster.stats(dist(df), km.res$cluster)
# Dunn index
km_stats$dunn
km_stats
```
---
### External cluster validation

```{r echo=T}
table(iris$Species, km.res$cluster)
```

- All setosa specimens (n = 50) have been classified in cluster 1.
- A large number of versicolor specimens (n = 39) have been classified in cluster 3; some of them (n = 11) have been classified in cluster 2.
- A large number of virginica specimens (n = 36) have been classified in cluster 2; some of them (n = 14) have been classified in cluster 3.

```{r echo=T}
# Compute cluster stats
species <- as.numeric(iris$Species)
clust_stats <- cluster.stats(d = dist(df), species, km.res$cluster)
# Corrected Rand index
# Measures the agreement between two partitions (the clustering and the
# split by species) and takes values in [-1, 1]. The closer the value
# is to 1, the better the clustering.
clust_stats$corrected.rand
```
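For reference, the corrected (adjusted) Rand index can be computed directly from the contingency table of the two partitions; a Python sketch of the standard formula (illustrative only, not the fpc implementation):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(a, b):
    """Adjusted Rand index between two label vectors a and b.
    Assumes at least one of the partitions is non-trivial."""
    n = len(a)
    pairs = lambda c: sum(comb(v, 2) for v in c.values())
    nij = pairs(Counter(zip(a, b)))   # pairs that agree in both partitions
    ai = pairs(Counter(a))            # pairs within clusters of a
    bj = pairs(Counter(b))            # pairs within clusters of b
    expected = ai * bj / comb(n, 2)   # chance-level agreement
    max_index = (ai + bj) / 2
    return (nij - expected) / (max_index - expected)
```

A value of 1 means the two partitions are identical up to label names; values near 0 mean chance-level agreement.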
---
```{r echo=T}
# VI
clust_stats$vi
```

The same analysis can be carried out with other clustering algorithms, such as hierarchical clustering or PAM (Partitioning Around Medoids).

##### PAM
```{r echo=T}
# Agreement between the species and the PAM clusters
pam.res <- eclust(df, "pam", k = 3, graph = FALSE)
table(iris$Species, pam.res$cluster)
cluster.stats(d = dist(iris.scaled),
              species, pam.res$cluster)$vi
```
---
##### HC
```{r echo=T}
# Agreement between the species and the HC clusters
res.hc <- eclust(df, "hclust", k = 3, graph = FALSE)
table(iris$Species, res.hc$cluster)
cluster.stats(d = dist(iris.scaled),
              species, res.hc$cluster)$vi
```
---
## Choosing the best clustering algorithm

To compare different clustering algorithms simultaneously we can use the clValid package.
clValid compares clustering algorithms using two different kinds of measures for validating the results:
1. Internal measures
2. Stability measures

### 1. Internal measures
They use intrinsic information from the data, such as:
- connectivity
- the silhouette coefficient
- the Dunn index
---
### 2. Stability measures

They evaluate the consistency of the final clustering by comparing it with the clusterings obtained after removing the columns one at a time.

Stability measures include:
- APN (the average proportion of non-overlap) --> values in [0, 1], where low values indicate high consistency. APN is the average proportion of observations NOT placed IN THE SAME CLUSTER by the clustering based on the full data and the clusterings based on the data with a single column removed.
- AD (the average distance) --> values in [0, ∞), where low values are preferred. AD is the average distance between the observations placed IN THE SAME CLUSTER in both cases (full data and data with one column removed).
- ADM (the average distance between means) --> values in [0, ∞), where low values are preferred. ADM is the average distance between the cluster centres for observations placed IN THE SAME CLUSTER (in both cases).
- FOM (the figure of merit) --> values in [0, ∞), where low values indicate high consistency. FOM is the average intra-cluster variance of the removed column, where the clustering is based on the remaining columns.
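To make APN concrete: for each observation and each leave-one-column-out clustering, it measures the proportion of the observation's full-data cluster mates that it loses. A Python sketch over precomputed label vectors (illustrative only, not clValid's implementation):

```python
def apn(full_labels, reduced_labels_list):
    """Average proportion of non-overlap between the full-data clustering
    and the clusterings with one column removed (given as label vectors)."""
    n = len(full_labels)
    total, count = 0.0, 0
    for red in reduced_labels_list:
        for i in range(n):
            # observations sharing i's cluster in each clustering
            full_mates = {j for j in range(n) if full_labels[j] == full_labels[i]}
            red_mates = {j for j in range(n) if red[j] == red[i]}
            total += 1 - len(full_mates & red_mates) / len(full_mates)
            count += 1
    return total / count
```

An APN of 0 means every observation keeps exactly the same cluster mates no matter which column is removed, i.e. a perfectly stable clustering.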
---
```{r echo=T}
library(clValid)
# Iris data set:
# - remove the Species column and scale
df <- scale(iris[, -5])
# Compute clValid
clmethods <- c("hierarchical","kmeans","pam")
intern <- clValid(df, nClust = 2:6,
                  clMethods = clmethods, validation = "internal")
# Summary
summary(intern)
```
---
## R code for the clValid function

```{r echo=T, eval=F}
clValid(obj, nClust, clMethods = "hierarchical",
        validation = "stability", maxitems = 600,
        metric = "euclidean", method = "average")
```

Parameters:
- obj: the data to be clustered
- nClust: the numbers of clusters to evaluate
- clMethods: the clustering method(s) to use
- validation: the type of validation used (internal, stability or biological)
- maxitems: the maximum number of items that can be clustered
- metric: the metric used ("euclidean", "correlation", "manhattan")
- method: [hierarchical clustering only] the agglomeration method used ("ward", "single", "complete")
---
### The stability measures can be computed as follows:

```{r echo=T, eval=T}
# Stability measures
clmethods <- c("hierarchical","kmeans","pam")
stab <- clValid(df, nClust = 2:6, clMethods = clmethods,
                validation = "stability")
# Display only the optimal scores
optimalScores(stab)
```
---
# Computing p-values for hierarchical clustering

We use pvclust, which applies the following algorithm:
1. Generate thousands of bootstrap samples by randomly sampling elements of the data.
2. Compute the hierarchical clustering on each bootstrap copy.
3. For each cluster:
- compute the bootstrap probability (BP) value, which corresponds to the frequency with which the cluster is identified in the bootstrap copies
- compute the approximately unbiased (AU) probability values (p-values) by multiscale bootstrap resampling
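The BP value in step 3 can be sketched as follows: resample the rows with replacement, recluster the columns, and count how often a given column set appears as a cluster. This is a toy single-linkage version in Python (illustrative only; the real pvclust additionally varies the sample size across replicates to obtain the AU values):

```python
import random

def single_linkage_sets(cols, rows):
    """Agglomerate columns (tuples indexed by the given row sample) with
    single linkage; return every merged set of column indices."""
    def d(a, b):
        return min(sum((cols[i][r] - cols[j][r]) ** 2 for r in rows) ** 0.5
                   for i in a for j in b)
    clusters = [frozenset([i]) for i in range(len(cols))]
    merged = []
    while len(clusters) > 1:
        x, y = min(((x, y) for x in range(len(clusters))
                    for y in range(x + 1, len(clusters))),
                   key=lambda p: d(clusters[p[0]], clusters[p[1]]))
        new = clusters[x] | clusters[y]
        merged.append(new)
        clusters = [c for k, c in enumerate(clusters) if k not in (x, y)] + [new]
    return merged

def bootstrap_probability(cols, target, nboot=50, seed=5):
    """BP value: fraction of bootstrap row-resamples in which `target`
    appears as a cluster of columns."""
    rng = random.Random(seed)
    nrow = len(cols[0])
    hits = 0
    for _ in range(nboot):
        rows = [rng.randrange(nrow) for _ in range(nrow)]
        if frozenset(target) in single_linkage_sets(cols, rows):
            hits += 1
    return hits / nboot
```

A cluster that survives nearly every resample (BP close to 1) is well supported by the data.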
---
# The pvclust function

```{r echo=T, eval=F}
pvclust(data, method.hclust = "average",
        method.dist = "correlation", nboot = 1000)
```

The pvclust function clusters the columns of the data set.

Parameters:
- data: the data matrix used
- method.hclust: the agglomeration method used. Possible values:
  + average --> default
  + ward
  + single
  + complete
  + mcquitty
  + median
  + centroid
- method.dist: the distance measure used:
  + correlation
  + uncentered
  + abscor
  + euclidean
  + manhattan
- nboot: the number of bootstrap replications. Default: 1000
- iseed: an integer used for generating random seeds. It should be set if we want to be able to reproduce the results later.
---
# Example of using pvclust()

```{r echo=T, eval=F}
library(pvclust)
set.seed(123)
res.pv <- pvclust(df, method.dist="cor",
                  method.hclust="average", nboot = 10)
# Default plot
plot(res.pv, hang = -1, cex = 0.5)
pvrect(res.pv)
```