List — AI Projects: NLP & NMT 🌎🍀✨ (18)
A Joyful AI Research Journey 🌳😊
https://github.com/yjyuwisely/MovieSense_NLP
Positive Paragraph: The film was an exhilarating journey from beginning to end. Not only was the plot engaging, but the characters were also crafted with such depth and nuance that you couldn't help but root for them. The cinematography painted a visual tapestry that was nothing short of breathtaking, drawing the audience into each scene. The soundtrack, with its sublime melodies, further elevat..
In the expression |{d ∈ D : t ∈ d}|, the braces { } denote a set. d ∈ D means "document d is in the set D" (i.e., d is one of the documents in the corpus D). t ∈ d means "term t is in document d" (i.e., the term t appears in the document d). The colon can be read as "such that". So {d ∈ D : t ∈ d} describes the set of all documents d in the corpus D such that the term t appears in d, and the vertical bars |·| denote the size (cardinality) of that set. In plain English, it is the number of documents that contain the term t.
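The cardinality above (the document frequency of a term) can be computed directly. A minimal Python sketch — the corpus `D` and the helper name `document_frequency` are illustrative, not from the post:

```python
# Toy corpus: D is a collection of documents, each represented as a set of terms.
D = [
    {"the", "cat", "sat"},
    {"the", "dog", "ran"},
    {"a", "cat", "ran"},
]

def document_frequency(t, D):
    """Return |{d in D : t in d}|: the number of documents containing term t."""
    return len([d for d in D if t in d])

print(document_frequency("cat", D))    # → 2
print(document_frequency("zebra", D))  # → 0
```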
To determine P(J|F,I), the probability that Jill Stein is the speaker given the words 'freedom' and 'immigration', we apply Bayes' Theorem: P(J|F,I) = P(J) × P(F|J) × P(I|J) / P(F,I). Where: P(J) is the prior probability (the overall likelihood of Jill Stein giving a speech); in our case, P(J) = 0.5. P(F|J) and P(I|J) are the likelihoods. These represent the probabilities of Jill Stein saying the words 'freedom' ..
Bayesian inference is a method of statistical analysis that allows us to update probability estimates as new data arrives. In the realm of Natural Language Processing (NLP), it is often used in spam detection, sentiment analysis, and more. Let's explore the initial steps of preprocessing text data for Bayesian inference. 1. Convert Text to Lowercase: To ensure consistency, we convert all text da..
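The lowercasing step (and the punctuation stripping that typically follows it) can be sketched as a small helper; the function name and the follow-on steps beyond lowercasing are illustrative assumptions, since the excerpt cuts off after step 1:

```python
import string

def preprocess(text):
    """Lowercase the text, strip punctuation, and split into tokens."""
    text = text.lower()                                            # step 1: consistency
    text = text.translate(str.maketrans("", "", string.punctuation))  # drop punctuation
    return text.split()                                            # whitespace tokenize

print(preprocess("Hello, World! Is this SPAM?"))
# → ['hello', 'world', 'is', 'this', 'spam']
```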
When working with data in Python, the pandas library is a vital tool. However, a common hiccup new users face is the "NameError" related to its commonly used alias 'pd'. Let's understand and resolve this error. The message "NameError: name 'pd' is not defined" indicates that the pandas library, commonly aliased as "pd", hasn't been imported. The solution is straightforward. You need to ensure th..
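The fix described above is a single import line, assuming pandas is installed in the environment; the small DataFrame below is just a sanity check that the alias works:

```python
# The fix: bind pandas to its conventional alias before any use of `pd`.
import pandas as pd

# Sanity check that `pd` is now defined (example data is illustrative).
df = pd.DataFrame({"word": ["freedom", "immigration"], "count": [3, 1]})
print(df["count"].sum())  # → 4
```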
In the context of the Naive Bayes classifier, probability normalization plays a vital role, especially when we want our probabilities to reflect the true likelihood of an event occurring in comparison to other events. When predicting class labels using the Naive Bayes formula, we compute the product of feature probabilities for each class. However, these products do not sum up to 1 across classe..
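The normalization step can be sketched as dividing each class's raw product by the sum over all classes; the class labels and raw scores below are illustrative:

```python
def normalize(scores):
    """Scale raw per-class scores so they sum to 1, yielding valid posteriors."""
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

# Unnormalized products of prior × feature likelihoods for each class (illustrative)
raw = {"spam": 0.005, "ham": 0.07}
posterior = normalize(raw)
print(posterior)  # values now sum to 1; 'ham' is the more probable class
```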
Let's break down the regex pattern \b\w+\b and explain it with examples. 1. \w The \w metacharacter matches any word character, which is equivalent to the character set [a-zA-Z0-9_]. This includes: Uppercase letters: A to Z Lowercase letters: a to z Digits: 0 to 9 Underscore: _ 2. \w+ The + is a quantifier that means "one or more" of the preceding character or group. So, \w+ matches one or more ..
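The full pattern \b\w+\b (word boundary, one or more word characters, word boundary) effectively extracts whole "words" from text, which Python's `re.findall` demonstrates directly:

```python
import re

# \b\w+\b matches maximal runs of [a-zA-Z0-9_] bounded by word boundaries.
text = "word_1, word-2 and 3 words!"
print(re.findall(r"\b\w+\b", text))
# → ['word_1', 'word', '2', 'and', '3', 'words']
# Note: '-' is not a word character, so "word-2" splits into 'word' and '2'.
```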
ChatGPT, response to “Is it better to first understand seq2seq models in-depth and then use high-level libraries like Hugging Face or TensorFlow? Is this approach similar to studying theory first and then using a library?,” August 27, 2023, OpenAI. Yes, your understanding is on point. Let's delve into why this sequential approach of starting with seq2seq and then moving on to modern libraries li..
ChatGPT, response to “Can I use a seq2seq model for NMT using Hugging Face, Keras, or TensorFlow?” August 27, 2023, OpenAI. Yes, the seq2seq (sequence-to-sequence) model is a foundational architecture for NMT (Neural Machine Translation), and you can implement and train it using any of the mentioned frameworks: Hugging Face's Transformers, Keras, or TensorFlow. Here's a brief overview of how you..