New Language Models for Serbian
Abstract
This paper briefly presents the development history of transformer-based language models for the Serbian language. Several new models for text generation and vectorization, trained on the resources of the Society for Language Resources and Technologies, are also presented. Ten selected vectorization models for Serbian, including the two new ones, are compared on four natural language processing tasks. The paper analyzes which models perform best on each selected task, how model size and training-set size affect performance on those tasks, and what the optimal setting is for training the best language models for the Serbian language.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.