Abstract. This paper describes algorithms for the non-negative factorization of sparse matrices and tensors, a technique widely used in artificial intelligence in general and in computational linguistics in particular. We propose using latent Dirichlet allocation to reduce matrices and tensors to block-diagonal form, which parallelizes the computation and accelerates the non-negative factorization of linguistic matrices and tensors of extremely large dimension. The proposed approach also allows models to be updated with new data without recomputing the non-negative factorization of the entire very large tensor from scratch.
Keywords: artificial intelligence, computational linguistics, parallel computations.