
indobenchmark/indobert-base-p2
IndoBERT is a state-of-the-art language model for Indonesian based on the BERT architecture. The model is pretrained with masked language modeling (MLM) and next sentence prediction (NSP) objectives.
Model | #params | Arch. | Training data |
---|---|---|---|
indobenchmark/indobert-base-p1 | 124.5M | Base | Indo4B (23.43 GB of text) |
indobenchmark/indobert-base-p2 | 124.5M | Base | Indo4B (23.43 GB of text) |
indobenchmark/indobert-large-p1 | 335.2M | Large | Indo4B (23.43 GB of text) |
indobenchmark/indobert-large-p2 | 335.2M | Large | Indo4B (23.43 GB of text) |
indobenchmark/indobert-lite-base-p1 | 11.7M | Base | Indo4B (23.43 GB of text) |
indobenchmark/indobert-lite-base-p2 | 11.7M | Base | Indo4B (23.43 GB of text) |
indobenchmark/indobert-lite-large-p1 | 17.7M | Large | Indo4B (23.43 GB of text) |
indobenchmark/indobert-lite-large-p2 | 17.7M | Large | Indo4B (23.43 GB of text) |
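
The checkpoints can be loaded with the Hugging Face `transformers` library. Below is a minimal sketch of loading `indobenchmark/indobert-base-p2` and predicting a masked token, which illustrates the MLM pretraining objective described above; the example sentence is illustrative, and the snippet assumes the standard `BertTokenizer`/`BertForMaskedLM` API.

```python
# Minimal sketch: load IndoBERT and fill a masked token.
# The sentence and any printed prediction are illustrative only.
import torch
from transformers import BertTokenizer, BertForMaskedLM

model_name = "indobenchmark/indobert-base-p2"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)
model.eval()

# Indonesian sentence with one masked token.
text = "aku adalah anak [MASK]"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the most likely token.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```

For sentence-level contextual representations rather than token predictions, the same checkpoint can be loaded with `AutoModel.from_pretrained(model_name)` and the encoder's hidden states used directly.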