CANINE-s (CANINE pre-trained with subword loss)
Pretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository.
What’s special about CANINE is that it doesn’t require an explicit tokenizer (such as WordPiece or SentencePiece) the way other models like BERT and RoBERTa do. Instead, it operates directly at the character level: each character is turned into its Unicode code point.
This means that input processing is trivial and can typically be accomplished as:
input_ids = [ord(char) for char in text]
The ord() function is part of Python, and turns each character into its Unicode code point.
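To make this concrete, here is a minimal sketch of the encoding in pure Python (the example string is illustrative; note that the real CANINE tokenizer also prepends and appends special code points, which this sketch omits):

```python
# Convert each character of a text to its Unicode code point,
# which is the form of input_ids CANINE operates on.
text = "hello"
input_ids = [ord(char) for char in text]
print(input_ids)  # [104, 101, 108, 108, 111]

# The mapping is lossless: chr() inverts ord().
decoded = "".join(chr(i) for i in input_ids)
print(decoded)  # hello
```

Because every Unicode character already has a code point, no vocabulary file is needed for this step.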
Disclaimer: The team releasing CANINE did not write a model card for this model so this model card has been written by the Hugging Face team.
Model description
CANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens, while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE.
- Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.
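As a rough illustration of the masking idea, the sketch below corrupts a fraction of input characters and records what must be predicted. This is a simplified, hypothetical version for intuition only: the actual CANINE-s pretraining predicts subword identities (not single characters) and its masking procedure, mask symbol, and rates differ from what is shown here.

```python
import random

MASK_CHAR = "\u25a1"  # placeholder mask symbol (illustrative, not CANINE's)

def mask_characters(text, mask_prob=0.15, seed=0):
    """Randomly replace a fraction of characters with a mask symbol.

    Returns the corrupted text plus (position, original_char) pairs,
    i.e. the prediction targets a character-level MLM would train on.
    """
    rng = random.Random(seed)
    chars = list(text)
    targets = []
    for i, c in enumerate(chars):
        if rng.random() < mask_prob:
            targets.append((i, c))  # the model must recover this character
            chars[i] = MASK_CHAR
    return "".join(chars), targets

masked, targets = mask_characters("life is like a box of chocolates")
```

The model sees `masked` and is trained to recover the entries in `targets`; CANINE-s softens this further by asking for subword tokens rather than individual characters.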
This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.
Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it’s mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
How to use
Here is how to use this model:
from transformers import CanineTokenizer, CanineModel
model = CanineModel.from_pretrained('google/canine-s')
tokenizer = CanineTokenizer.from_pretrained('google/canine-s')
inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
outputs = model(**encoding) # forward pass
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
Training data
The CANINE model was pretrained on the multilingual Wikipedia data of mBERT, which includes 104 languages.
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-2103-06874,
author = {Jonathan H. Clark and
Dan Garrette and
Iulia Turc and
John Wieting},
title = {{CANINE:} Pre-training an Efficient Tokenization-Free Encoder for
Language Representation},
journal = {CoRR},
volume = {abs/2103.06874},
year = {2021},
url = {https://arxiv.org/abs/2103.06874},
archivePrefix = {arXiv},
eprint = {2103.06874},
timestamp = {Tue, 16 Mar 2021 11:26:59 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-06874.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}