NCHLT Sepedi RoBERTa language model

dc.contact.email: Roald.Eiselen@nwu.ac.za
dc.contact.name: Roald Eiselen
dc.contributor.author: Roald Eiselen
dc.contributor.other: Rico Koen
dc.contributor.other: Albertus Kruger
dc.contributor.other: Jacques van Heerden
dc.date.accessioned: 2023-07-28T08:11:37Z
dc.date.accessioned: 2023-05-01
dc.date.available: 2023-07-28T08:11:37Z
dc.date.available: 2023-05-01
dc.date.issued: 2023-05-01
dc.description: Contextual masked language model based on the RoBERTa architecture (Liu et al., 2019). The model is trained as a masked language model and is not fine-tuned for any downstream task. It can be used either as a masked LM or as an embedding model that provides real-valued vector representations of words or string sequences for Sepedi text. (A usage sketch follows the file listing below.)
dc.format.extent: Training data: 292,594 paragraphs; 8,908,709 tokens; vocabulary size: 30,000; embedding dimensions: 768
dc.format.size: 236.03 MB (zipped)
dc.identifier.uri: https://hdl.handle.net/20.500.12185/638
dc.language.iso: nso
dc.languages: Sepedi
dc.media.category: Language model
dc.media.type: Text
dc.project: NCHLT Text IV
dc.publisher: North-West University; Centre for Text Technology (CTexT)
dc.rights.license: Creative Commons Attribution 4.0 International (CC-BY 4.0)
dc.software.requirements: Python
dc.source: Web
dc.source: Government Documents
dc.title: NCHLT Sepedi RoBERTa language model
dc.type: Modules

Files

Original bundle

Name: nso.RoBERTa.tar.gz
Size: 236.03 MB
Format: Unknown data format
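
Usage sketch

As the description notes, the model can be used either as a masked LM or as an embedding model producing 768-dimensional vectors. Below is a minimal sketch of both uses with the Hugging Face Transformers library, assuming the downloaded archive nso.RoBERTa.tar.gz has been extracted into a local directory in standard Transformers format; the path ./nso.RoBERTa and the example sentence are hypothetical illustrations, not part of this record.

import torch
from transformers import AutoModel, AutoModelForMaskedLM, AutoTokenizer

model_dir = "./nso.RoBERTa"  # hypothetical path to the extracted archive

tokenizer = AutoTokenizer.from_pretrained(model_dir)

# 1. Masked language modelling: predict the token hidden behind the mask token.
mlm = AutoModelForMaskedLM.from_pretrained(model_dir)
text = f"Ke rata {tokenizer.mask_token}."  # example Sepedi sentence with one masked token
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = mlm(**inputs).logits
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(top_id))  # most likely filler for the masked position

# 2. Embeddings: use the encoder's hidden states as real-valued vector
#    representations of words or whole string sequences.
encoder = AutoModel.from_pretrained(model_dir)
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
sentence_vector = hidden.mean(dim=1)  # simple mean pooling over token vectors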