Institutional Repository, University of Moratuwa.

Neural mixture models with expectation-maximization for end-to-end deep clustering


dc.contributor.author Tissera, D
dc.contributor.author Vithanage, K
dc.contributor.author Wijesinghe, R
dc.contributor.author Xavier, A
dc.contributor.author Jayasena, S
dc.contributor.author Fernando, S
dc.contributor.author Rodrigo, R
dc.date.accessioned 2023-06-21T08:53:56Z
dc.date.available 2023-06-21T08:53:56Z
dc.date.issued 2022
dc.identifier.citation Tissera, D., Vithanage, K., Wijesinghe, R., Xavier, A., Jayasena, S., Fernando, S., & Rodrigo, R. (2022). Neural mixture models with expectation-maximization for end-to-end deep clustering. Neurocomputing, 505, 249–262. https://doi.org/10.1016/j.neucom.2022.07.017 en_US
dc.identifier.issn 0925-2312 en_US
dc.identifier.uri http://dl.lib.uom.lk/handle/123/21139
dc.description.abstract Any clustering algorithm must simultaneously learn to model the clusters and allocate data to those clusters in the absence of labels. Mixture model-based methods model clusters with pre-defined statistical distributions and allocate data to those clusters based on the cluster likelihoods. They iteratively refine the distribution parameters and member assignments following the Expectation-Maximization (EM) algorithm. However, the cluster representability of such hand-designed distributions, which employ a limited number of parameters, is not adequate for most real-world clustering tasks. In this paper, we realize mixture model-based clustering with a neural network whose final-layer neurons, with the aid of an additional transformation, approximate cluster distribution outputs. The network parameters serve as the parameters of those distributions. The result is a more elegant and far more general representation of clusters than a restricted mixture of hand-designed distributions. We train the network end-to-end via batch-wise EM iterations in which the forward pass acts as the E-step and the backward pass acts as the M-step. In image clustering, the mixture-based EM objective can serve as the clustering objective alongside existing representation learning methods. In particular, we show that fusing mixture-EM optimization with consistency optimization improves clustering performance over consistency optimization alone. Our trained networks outperform single-stage deep clustering methods that still depend on k-means, with unsupervised classification accuracies of 63.8% on STL10, 58% on CIFAR10, 25.9% on CIFAR100, and 98.9% on MNIST. en_US
dc.language.iso en_US en_US
dc.publisher Elsevier en_US
dc.subject Deep Clustering en_US
dc.subject Mixture Models en_US
dc.subject Expectation-Maximization en_US
dc.title Neural mixture models with expectation-maximization for end-to-end deep clustering en_US
dc.type Article-Full-text en_US
dc.identifier.year 2022 en_US
dc.identifier.journal Neurocomputing en_US
dc.identifier.volume 505 en_US
dc.identifier.database ScienceDirect en_US
dc.identifier.pgnos 249-262 en_US
dc.identifier.doi https://doi.org/10.1016/j.neucom.2022.07.017 en_US
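
The batch-wise EM training described in the abstract above can be illustrated with a short sketch. This is a minimal illustration only, assuming a toy fully connected PyTorch encoder and a softmax transform over the final-layer neurons; the names (NeuralMixture, em_step), the architecture, and the exact transformation are assumptions for exposition, not the paper's implementation.

```python
# Minimal sketch of batch-wise neural mixture EM (illustrative, not the
# authors' code). The forward pass plays the role of the E-step and the
# backward pass the M-step, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralMixture(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_clusters=10):
        super().__init__()
        # Toy MLP encoder; the paper's actual backbone and output
        # transformation may differ.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_clusters),   # one output neuron per cluster
        )
        # Mixing weights pi_k kept as free parameters (softmax-normalised).
        self.log_pi = nn.Parameter(torch.zeros(n_clusters))

    def forward(self, x):
        # Additional transformation turning final-layer activations into
        # per-cluster log-likelihood scores.
        return F.log_softmax(self.encoder(x), dim=1)

def em_step(model, x, optimizer):
    log_lik = model(x)                                   # forward pass
    log_joint = log_lik + F.log_softmax(model.log_pi, dim=0)
    with torch.no_grad():
        gamma = F.softmax(log_joint, dim=1)              # E-step: responsibilities
    # M-step: maximise the expected complete-data log-likelihood
    # sum_i sum_k gamma_ik log(pi_k p_k(x_i)) via gradient descent.
    loss = -(gamma * log_joint).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Detaching the responsibilities gamma mirrors the E/M split: the E-step posterior is held fixed while the backward pass updates the network and mixing weights (M-step). The paper's full method additionally combines this mixture-EM objective with a consistency objective from existing representation learning methods.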

