A Joyful AI Research Journey🌳😊
Links to BERT base model (uncased)
The model bert-base-uncased is used because it converts all text to lowercase before processing, ignoring case differences. This is useful when case sensitivity does not matter for the task, for example sentiment analysis, where "Happy" and "happy" should be treated the same. Because the uncased vocabulary is smaller and case variants collapse to one token, the "uncased" version is generally a good default whenever the distinction between uppercase and lowercase letters adds no significant value to the model's performance.
https://huggingface.co/google-bert/bert-base-uncased
https://arxiv.org/abs/1810.04805
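The case-folding behavior described above can be sketched in plain Python. This is a minimal, hypothetical approximation of the normalization the uncased BERT tokenizer applies (lowercasing plus accent stripping); the real `BertTokenizer` additionally performs WordPiece subword splitting, which is omitted here.

```python
import unicodedata

def uncased_normalize(text: str) -> str:
    """Approximate the 'uncased' preprocessing: lowercase, then strip accents.

    The real Hugging Face BertTokenizer for bert-base-uncased also applies
    WordPiece tokenization on top of this; that step is not shown.
    """
    text = text.lower()
    # Decompose characters so accents become separate combining marks ("Mn"),
    # then drop those marks.
    text = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in text if unicodedata.category(ch) != "Mn")

print(uncased_normalize("Happy"))  # -> happy
print(uncased_normalize("Héllo"))  # -> hello
```

In practice, loading the tokenizer with `transformers.AutoTokenizer.from_pretrained("bert-base-uncased")` applies this normalization automatically, so "Happy" and "happy" map to the same input IDs.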