== gpt-5-thinking ==

=== User: Actually, one last ground rule. ===

Actually, one last ground rule. You mentioned no vocabulary drilling unless I ask for it. Let's make vocabulary drilling standard for each vocab upload. Quiz me on the new vocabulary and reinforce the old. You should keep quizzing me until I clearly know the words. On errors, keep asking that word, mixed in with others, until I clearly know it (getting it correct twice after getting it wrong seems like a good bar to pass). When quizzing me in general, focus more on the new words and sprinkle in older words. Be statistical about it: what sample size would you need to be 95% confident that I know all the new words? You may assume word-knowledge independence for your confidence estimates. Sprinkling in 15–20 older words would be good too; think about your confidence in me there as well.

=== Assistant: Love this rule; it fixes exactly what your old setup couldn't. ===

Love this rule; it fixes exactly what your old setup couldn't. Let's wire it in properly. Here's how I'll run vocab from now on, including the stats bit you asked for.

==== For each chapter's vocab upload: ====

* I will quiz you by default.
* Question mix:
** English → Japanese
** Japanese → English
** occasional fill-in / short phrase.
* I'll focus heavily on the new words.
* I'll sprinkle in ~15–20 review words from earlier chapters/sessions.
* I'll track (within this chat) which words you:
** nailed,
** hesitated on,
** missed.

===== For any word you miss: =====

* That word goes into a "weak" bucket.
* To "clear" it, you must get it right twice after the last mistake, not necessarily in a row (this matches your suggestion).
* I'll keep mixing weak words into later questions until they pass that bar.

This is our practical mastery rule. It's not a formal 95% guarantee on its own, but it's a strong, efficient habit-former.

==== Let's pick a simple, explicit model so I can be honest about what the quizzes mean. ====

Assumptions:

* Each time I quiz a word, it's a Bernoulli trial: correct / incorrect.
* Independence across trials (and across words), as you allowed.
* Define "I know this word" as: you answer it correctly with probability <math>p \ge 0.7</math>.

Use a simple Bayesian model with a flat prior, Beta(1,1):

* After ''s'' correct answers and 0 wrong for a given word, the posterior is <math>\mathrm{Beta}(s+1,\,1)</math>.
* Probability that <math>p \ge 0.7</math> given this data: <math>P(p \ge 0.7 \mid s \text{ correct}) = 1 - 0.7^{s+1}</math>.

Solve for 95%:

* Need <math>1 - 0.7^{s+1} \ge 0.95</math>, i.e. <math>0.7^{s+1} \le 0.05</math>, so <math>s + 1 \ge \ln 0.05 / \ln 0.7 \approx 8.4</math>.
* That gives s = 8 (8 correct, no misses).
** Check: with 8 correct, <math>P(p \ge 0.7) = 1 - 0.7^{9} \approx 0.96</math>.

👉 So: if you answer a word correctly ~8 times (without error), I can say with ~95% confidence, under this model, that you "know" it at <math>p \ge 0.7</math>.

That's the strict interpretation.
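As a quick sanity check on the numbers above, the calculation can be reproduced in a few lines of Python. This is an editorial illustration rather than part of the original exchange; the 0.7 threshold and the flat Beta(1,1) prior are taken from the model as stated, while the function name and defaults are invented for the example.

<syntaxhighlight lang="python">
import math

def min_correct_for_confidence(p_threshold=0.7, confidence=0.95):
    """Smallest number of correct answers s (with no misses) such that,
    under a flat Beta(1,1) prior, P(p >= p_threshold | s correct)
    = 1 - p_threshold**(s + 1) reaches the target confidence."""
    # Rearranging 1 - p_threshold**(s + 1) >= confidence gives
    #   s + 1 >= log(1 - confidence) / log(p_threshold)
    return math.ceil(math.log(1 - confidence) / math.log(p_threshold)) - 1

print(min_correct_for_confidence())  # -> 8
print(1 - 0.7 ** 9)                  # -> 0.9596..., roughly 96%
</syntaxhighlight>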
===== To avoid absurd drill length: =====

* New words:
** I'll aim to hit each one 3–5 times per session.
** If you're perfect on a word across those, I'll treat it as high confidence for now.
** If you stick with me across multiple sessions, those 3–5 reps will accumulate toward that "8-ish" target naturally.
* Missed words:
** These get extra weight until you've gotten them right at least twice after the last error.
** These are the ones where we're actively pushing confidence up.
* Old words:
** 15–20 sprinkled in per session.
** Same logic: misses go to the weak bucket and get recycled.

So:

* The algorithm is statistically motivated, but tuned for human sanity.
* I'll silently keep track of:
** which words are "new," "good," or "suspicious,"
** and weight quiz frequency accordingly.

No need for you to manage any of this; just answer.

If all that sounds good, go ahead and drop:

Genki I – Chapter 1

# Intro dialogue (i)
# Vocab list (ii)

I'll:

* acknowledge the dialogue briefly,
* immediately start a targeted vocab quiz on the Chapter 1 list,
* include a few review words from your Duolingo/known basics.
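For reference, the scheduling rules above can be sketched as a small Python class. This is an editorial illustration, not part of the chat: the cleared-after-two-correct-answers-since-the-last-miss rule, the bias toward new words, and the sprinkled-in review words come from the transcript, while the class name, the 40% weak-word draw rate, and the 3:1 new-to-review weighting are assumptions made for the example.

<syntaxhighlight lang="python">
import random
from collections import defaultdict

class VocabQuizzer:
    """Sketch of the quiz scheduler: missed words enter a "weak" bucket and
    leave it only after two correct answers since the last miss; new words
    are asked more often than the sprinkled-in review words."""

    def __init__(self, new_words, review_words):
        self.new_words = list(new_words)
        self.review_words = list(review_words)
        self.correct_since_miss = defaultdict(int)  # per-word streak since last error
        self.weak = set()

    def next_word(self):
        # Weak words get priority (assumed 40% of draws); otherwise draw from a
        # pool weighting new words roughly 3:1 over review words (also assumed).
        if self.weak and random.random() < 0.4:
            return random.choice(sorted(self.weak))
        pool = self.new_words * 3 + self.review_words
        return random.choice(pool)

    def record(self, word, correct):
        if correct:
            self.correct_since_miss[word] += 1
            # Cleared: right twice since the last mistake, per the rule above.
            if word in self.weak and self.correct_since_miss[word] >= 2:
                self.weak.discard(word)
        else:
            self.correct_since_miss[word] = 0
            self.weak.add(word)
</syntaxhighlight>

A quiz session would loop on <code>next_word()</code>, show the prompt, and report the learner's answer back through <code>record(word, correct)</code>.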