Title: NCHLT Sesotho RoBERTa language model
Creators: Roald Eiselen; Rico Koen; Albertus Kruger; Jacques van Heerden
Licence: Creative Commons Attribution 4.0 International (CC-BY 4.0)
Dates: 2023-05-01; 2023-07-28
Handle: https://hdl.handle.net/20.500.12185/640
Language: st (Sesotho)
Type: Modules
Size: 235.78 MB (zipped)
Description: Contextual masked language model based on the RoBERTa architecture (Liu et al., 2019). The model is trained as a masked language model and is not fine-tuned for any downstream task. It can be used either as a masked LM or as an embedding model that provides real-valued vectorised representations of words or string sequences for Sesotho text.
Training data: Paragraphs: 535,853; Token count: 17,425,650; Vocab size: 30,000; Embedding dimensions: 768
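Usage sketch: since the model follows the RoBERTa architecture, it can in principle be loaded with the Hugging Face `transformers` library for both of the uses named above (masked-token prediction and 768-dimensional embeddings). The snippet below is a minimal, hedged illustration, not the distributors' documented interface; the local path `nchlt-sesotho-roberta` is an assumption standing in for the directory obtained by unzipping the downloaded module, and the example sentence is a placeholder.

```python
# Minimal sketch: querying a RoBERTa-style masked LM and extracting embeddings
# with Hugging Face transformers. MODEL_PATH is hypothetical; point it at the
# unzipped model directory downloaded via the handle above.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, AutoModel

MODEL_PATH = "nchlt-sesotho-roberta"  # assumed local path to the unzipped model

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)

# 1) Masked language modelling: predict the most likely token at <mask>.
mlm_model = AutoModelForMaskedLM.from_pretrained(MODEL_PATH)
text = f"Placeholder Sesotho sentence with a {tokenizer.mask_token} token."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = mlm_model(**inputs).logits
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_id = logits[0, mask_pos].argmax(dim=-1)
print("Predicted token:", tokenizer.decode(top_id))

# 2) Embeddings: the 768-dimensional hidden states can serve as word or
#    sentence vectors (here with simple mean pooling over tokens).
encoder = AutoModel.from_pretrained(MODEL_PATH)
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # shape: (batch, tokens, 768)
sentence_vector = hidden.mean(dim=1)
print("Sentence embedding shape:", tuple(sentence_vector.shape))
```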