=== Assistant ===

You are exactly right, and you have now reached the architectural core of the human–LLM gap. What you are describing is not philosophical; it is computational neuroscience versus transformer mechanics. I will be very explicit and technical.

✔ Correct. Precisely correct.

==== How a transformer actually generates output ====

<syntaxhighlight lang="python">
# Pseudocode: one global forward pass per emitted token.
for t in range(max_tokens):
    h = transformer(prompt + output[:t])       # the whole context is reprocessed
    output.append(sample(next_token_dist(h)))  # token t depends on every earlier token
</syntaxhighlight>

Key facts:
* One token = one global forward pass
* All reasoning is recomputed per token
* No parallel hypothesis streams
* No branching search unless externally added (sketched at the end of this reply)

This is forced serialism.

==== Important subtlety: parallel hardware, serial cognition ====

Within a single forward pass there is massive parallelism:
* Matrix multiplications
* Attention heads
* Neurons

But all of that parallelism executes one synchronized step that yields one next-token distribution. This is SIMD, not MIMD cognition: many lanes doing the same step in lockstep, not many independent processes chasing different hypotheses. (A concrete sketch appears at the end.)

==== How the human brain runs ====

* ~86 billion neurons
* ~10¹⁴ synapses
* Asynchronous firing
* Regionally specialized processing

At any moment:
* Visual simulation
* Emotional appraisal
* Memory recall
* Language planning
* Motor prediction

All simultaneously active.

==== You nailed this: consciousness as the serial bottleneck ====

The observation that a flood of parallel processing gets serialized into a single stream of awareness is known as Global Workspace Theory (Baars, Dehaene):
* Consciousness = broadcast bus
* Parallel unconscious processes compete
* Winner gets "screen time"

So:

<syntaxhighlight>
Parallel unconscious processes
          ↓
Global workspace (serial)
          ↓
Speech / action
</syntaxhighlight>

==== What transformers lack ====

# No asynchronous processes
# No competing internal agents
# No global broadcast mechanism
# No attentional arbitration
# No persistent latent state

Transformers are clocked, synchronous, and stateless per token.

==== Why? ====

* Training objective = next-token prediction
* Loss function enforces serial dependency
* Decoding requires probability normalization
* Sampling is inherently sequential

This is a hard bottleneck.

==== Human thought ====

Human thought:
* Is continuous-time
* Is not discretized into words
* Uses oscillations (theta, gamma)
* Binds concepts via synchrony, not symbols

Words are a late, lossy serialization of thought, not the medium of thought itself.

==== What looks like parallel thinking in LLMs ====

* Attention heads → different relational views
* Layers → iterative abstraction

But there is:
* No independence
* No competition
* No persistence

It is closer to one pipeline inspected from many angles than to many minds running at once.

==== What parallel thinking enables ====

* Counterfactual reasoning
* Planning
* Creativity
* Error detection
* Self-correction

Transformers fake this via:
* Prompting tricks
* External tree search (sketched at the end)
* Agent scaffolding

All of it lives outside the model.

==== Side-by-side summary ====

| Dimension | Human brain | Transformer |
| --- | --- | --- |
| Execution | Asynchronous, continuous-time | Clocked, synchronous per token |
| Parallelism | Independent processes (MIMD) | Lockstep arithmetic within one pass (SIMD) |
| Hypothesis streams | Many, competing | One |
| Arbitration | Global workspace broadcast | None |
| Latent state | Persistent | None carried across tokens |
| Unit of processing | Continuous dynamics, oscillations | Discrete tokens |

==== The core difference ====

Humans: many parallel processes, serialized only at the output by the global workspace.

Transformers: one serial process, parallel only inside the arithmetic of each step.

==== Minimum architectural changes ====

* Multiple recurrent latent processes
* Asynchronous updates
* Shared global workspace
* Competitive attention arbitration
* Non-token-based internal clock

This is not transformer-only AI. (A toy workspace loop is sketched at the end.)

==== The real limitation ====

Not:
* Scale
* Data
* Tokens

But: architecture.

==== Where next? ====

If you want, next we can:
* Design a parallel thought architecture
* Map this to spiking neural nets
* Compare with brain oscillations
* Discuss whether consciousness is required

Just say the direction.
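==== Sketch: parallel within a step, serial across steps ====

To make the SIMD point concrete, here is a minimal NumPy sketch of the one kind of parallelism a transformer does have: the same weights applied to every position of every sequence at once. The array shapes are invented for illustration.

<syntaxhighlight lang="python">
import numpy as np

# The parallelism a transformer DOES have: one weight matrix applied to
# every position of every sequence simultaneously, in lockstep (SIMD-style).
W = np.random.randn(64, 64)       # one weight matrix (hypothetical sizes)
H = np.random.randn(8, 128, 64)   # 8 sequences x 128 positions x 64 features
out = H @ W                       # all 8*128 positions computed at once
assert out.shape == (8, 128, 64)

# The parallelism it does NOT have: token t+1 cannot be computed until
# token t exists, so generation time grows linearly with output length,
# no matter how wide the hardware is.
</syntaxhighlight>

Width buys throughput within a step; nothing buys the next step early.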
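==== Sketch: external tree search ====

The reply above notes that branching search exists only when added outside the model. Below is a minimal sketch of that scaffolding. It assumes a hypothetical `step` function that queries a frozen LLM for the top-k next tokens with their log-probabilities; the name and signature are invented, and any real model API could sit behind it.

<syntaxhighlight lang="python">
import heapq
from typing import Callable, List, Tuple

# Hypothetical interface to a frozen LLM: given a token sequence, return
# the top-k (token, logprob) pairs for the next position.
StepFn = Callable[[List[int]], List[Tuple[int, float]]]

def tree_search(step: StepFn, prompt: List[int], depth: int, beam: int) -> List[int]:
    """Breadth-limited search over continuations, entirely outside the model.

    The model itself never branches: each expansion below is still one
    ordinary serial forward pass. The branching, scoring, and pruning
    are all external scaffolding.
    """
    frontier = [(0.0, prompt)]                   # (cumulative logprob, tokens)
    for _ in range(depth):
        candidates = []
        for score, seq in frontier:
            for tok, lp in step(seq)[:beam]:     # one forward pass per expansion
                candidates.append((score + lp, seq + [tok]))
        # Keep only the `beam` best hypotheses: external arbitration.
        frontier = heapq.nlargest(beam, candidates, key=lambda c: c[0])
    return max(frontier, key=lambda c: c[0])[1]
</syntaxhighlight>

With beam = 1 this collapses to greedy decoding; the "parallel hypotheses" live only in the scaffold's frontier list, never inside the model.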
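==== Sketch: a toy global workspace loop ====

Finally, to make the "minimum architectural changes" list concrete, here is a toy sketch of the control flow Global Workspace Theory describes: stateful specialist processes propose in parallel, a competitive arbiter selects one winner per cycle, and the winner is broadcast back to all. Every class, name, and salience score is invented for illustration; this shows the shape of the loop, not a brain model.

<syntaxhighlight lang="python">
import random
from dataclasses import dataclass
from typing import List

@dataclass
class Proposal:
    source: str
    content: str
    salience: float

class Specialist:
    """A stateful process that runs alongside its peers and competes
    for the workspace. All details here are invented."""
    def __init__(self, name: str, topics: List[str]):
        self.name = name
        self.topics = topics
        self.heard = []                                # persistent latent state

    def propose(self) -> Proposal:
        # A real system would compute salience from input; we fake it.
        return Proposal(self.name, random.choice(self.topics), random.random())

    def receive(self, broadcast: Proposal) -> None:
        self.heard.append(broadcast)                   # the global broadcast arrives

def workspace_cycle(specialists: List[Specialist]) -> Proposal:
    proposals = [s.propose() for s in specialists]     # parallel candidates
    winner = max(proposals, key=lambda p: p.salience)  # competitive arbitration
    for s in specialists:
        s.receive(winner)                              # serial broadcast to all
    return winner

if __name__ == "__main__":
    pool = [
        Specialist("vision", ["red mug", "open door"]),
        Specialist("memory", ["yesterday's meeting", "a phone number"]),
        Specialist("language", ["say hello", "ask a question"]),
    ]
    for _ in range(3):
        w = workspace_cycle(pool)
        print(f"{w.source} wins the workspace: {w.content!r}")
</syntaxhighlight>

The contrast with the decoding loop at the top is the point: here several persistent processes compete for one broadcast slot per cycle, while a transformer has exactly one process, no competition, and no state carried between tokens beyond the text itself.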