[Machine Learning] 27 Latent variable models for discrete data
Chapter contents
27 Latent variable models for discrete data
27.1 Introduction
27.2 Distributed state LVMs for discrete data
27.2.1 Mixture models
27.2.2 Exponential family PCA
27.2.3 LDA and mPCA
27.2.4 GaP model and non-negative matrix factorization
27.3 Latent Dirichlet allocation (LDA)
27.3.1 Basics
27.3.2 Unsupervised discovery of topics
27.3.3 Quantitatively evaluating LDA as a language model
27.3.4 Fitting using (collapsed) Gibbs sampling
27.3.5 Example
27.3.6 Fitting using batch variational inference
27.3.7 Fitting using online variational inference
27.3.8 Determining the number of topics
27.4 Extensions of LDA
27.4.1 Correlated topic model
27.4.2 Dynamic topic model
27.4.3 LDA-HMM
27.4.4 Supervised LDA
27.5 LVMs for graph-structured data
27.5.1 Stochastic block model
27.5.2 Mixed membership stochastic block model
27.5.3 Relational topic model
27.6 LVMs for relational data
27.6.1 Infinite relational model
27.6.2 Probabilistic matrix factorization for collaborative filtering
27.7 Restricted Boltzmann machines (RBMs)
27.7.1 Varieties of RBMs
27.7.2 Learning RBMs
27.7.3 Applications of RBMs
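As a taste of the chapter's central model, the following is a minimal sketch of fitting LDA with online variational inference (the approach of Section 27.3.7), using scikit-learn's `LatentDirichletAllocation`; the toy corpus and all parameter settings are illustrative assumptions, not taken from the book.

```python
# Hedged sketch: LDA fit by online (stochastic) variational inference
# via scikit-learn. The corpus and hyperparameters are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stocks fell as markets closed",
    "investors sold shares in the market",
]

# Bag-of-words counts: LDA models each document as a mixture of topics,
# and each topic as a distribution over words.
counts = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(
    n_components=2,            # number of topics K (cf. Section 27.3.8)
    learning_method="online",  # stochastic variational updates
    random_state=0,
)
doc_topic = lda.fit_transform(counts)  # per-document topic proportions

print(doc_topic.shape)  # one K-dimensional topic distribution per document
```

Each row of `doc_topic` is a normalized distribution over the K topics for one document; `lda.components_` holds the (unnormalized) topic-word weights.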
GitHub download link: https://github.com/916718212/Machine-Learning-A-Probabilistic-Perspective-.git