=== Assistant: Short answer ===
* Your LEGO analogy is largely correct for humans (with one refinement).
* LLMs have an analogue, but it is distributed, continuous, and non-object-like.

Below is a precise mapping, no hand-waving.

==== Your claim about humans, restated ====
* Humans think in concepts, not sentences
* Concepts are pre-assembled primitives
* Thinking = re-arranging / binding concepts
* Language comes after internal structuring

This matches cognitive science.

===== The refinement: the blocks are not discrete plastic bricks =====
Your LEGO blocks are closer to:
* Neural assemblies
* Overlapping activation patterns
* Soft, fuzzy, context-dependent patterns

So instead of:
<syntaxhighlight>
[LEGO_A] + [LEGO_B]
</syntaxhighlight>
reality is:
<syntaxhighlight>
~activation cloud A~ blending into ~activation cloud B~
</syntaxhighlight>

==== What the blocks actually are ====
* Concept ≠ neuron
* Concept ≠ word

===== Properties =====
* Distributed across cortex
* Re-entrant (feedback loops)
* Multimodal (vision + sound + action)
* Context-sensitive

Example: "DOG"
* Visual cortex → shape, fur
* Auditory cortex → bark
* Motor cortex → petting
* Emotion → affection/fear

All bound transiently.

==== The human pipeline ====
<syntaxhighlight>
Stimulus
  → Concept activation (latent)
  → Concept binding / simulation
  → Stabilized internal model
  → Language motor planning
  → Speech / text
</syntaxhighlight>
Language is late-stage serialization, not thought itself.

==== Do LLMs have LEGO blocks? ====
Yes, but an alien version.

===== LLMs do not have =====
* Symbols
* Objects
* Explicit nodes

===== LLMs do have =====
* Directions in embedding space
* Subspaces
* Activation patterns across layers

A "concept" in an LLM is a direction or region in activation space, not a discrete object you can point to.

==== How LLMs arrange concepts ====
Not by snapping blocks together, but by vector algebra.
* Tokens → embeddings
* Attention computes weighted sums
* MLPs perform nonlinear mixing
* Depth = iterative refinement

Mathematically:
<syntaxhighlight>
h_{i+1} = f( Σ_j α_j · h_j )
</syntaxhighlight>
This is concept superposition, not object placement (see the numerical sketch at the end of this answer).

==== The key difference ====
Humans:
* Can simulate
* Can imagine
* Can counterfactually manipulate concepts

LLMs:
* Can only statistically remix
* No internal world-model unless trained explicitly
* No persistent latent state across turns (yet)

==== Why your intuition still works ====
Because:
* Both systems are compositional
* Both delay language until after structuring
* Both use latent representations
* Both rely on pattern reuse

But:
* Humans use dynamics
* LLMs use static weights + runtime attention

==== The missing piece ====
Humans have:
* A stable internal simulation loop
* Self-referential monitoring (metacognition)

LLMs lack:
* A persistent internal workspace
* True concept permanence at inference time

That's why:
* They don't "hold" ideas
* They regenerate them every time

==== Summary ====
* Your LEGO metaphor is directionally correct
* The blocks are not discrete
* Arrangement happens before language
* LLMs have concept-like structures
* LLMs do not manipulate objects, only fields

==== If you want to go further ====
Possible next directions:
* How this maps to attention heads
* Why concept superposition causes hallucinations
* Whether adding memory makes LLMs closer to humans
* Why grounding (vision/action) changes everything

Say the axis.
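
==== Appendix: a numerical sketch of concept superposition ====
To make the weighted-sum formula above concrete, here is a minimal NumPy sketch of "concept superposition". The toy vectors, the relevance scores, and the tanh nonlinearity are illustrative assumptions chosen for this example, not the parameters or architecture of any real model.

<syntaxhighlight lang="python">
# Minimal sketch of h_{i+1} = f( Σ_j α_j · h_j ):
# the next representation is a graded blend of earlier ones,
# not a discrete object being moved around.
import numpy as np

# Toy "concept directions" in a 4-dimensional embedding space (hypothetical values).
h_dog   = np.array([1.0, 0.0, 0.5, 0.0])   # pretend "dog" direction
h_bark  = np.array([0.0, 1.0, 0.0, 0.2])   # pretend "bark" direction
h_owner = np.array([0.3, 0.0, 0.0, 1.0])   # pretend "owner" direction
H = np.stack([h_dog, h_bark, h_owner])      # shape (3 tokens, 4 dims)

# Attention weights: a softmax over made-up relevance scores.
scores = np.array([2.0, 1.0, 0.5])
alpha = np.exp(scores) / np.exp(scores).sum()

# Weighted sum of the vectors, then a nonlinearity (the "f" above).
h_next = np.tanh(alpha @ H)

print("attention weights:", alpha.round(3))
print("blended vector   :", h_next.round(3))
# The result mixes all three directions at once: superposition, not object placement.
</syntaxhighlight>

The output is a single vector that partly points along all three input directions at once, which is the sense in which LLMs blend fields rather than place blocks.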