=== Assistant ===
Short answer: several people and groups have tried, at two levels.

# Foundational, formal attempts (Solomonoff/Hutter/Chaitin/MDL): these aim to formalize Occam's razor as algorithmic probability or description length and to show, theoretically, that simpler computable theories get higher posterior weight. Key references: Ray Solomonoff's formal theory of inductive inference (1964) and Marcus Hutter's Universal AI / AIXI development (book and papers) make this explicit.<ref>{{cite web|title=Ray Solomonoff|url=https://raysolomonoff.com/publications/1964pt1.pdf|publisher=raysolomonoff.com|access-date=2025-12-10}}</ref>
# Practical rediscovery systems: algorithms that take raw numeric/experimental data and output concise physical laws, thus operationalizing a formal simplicity bias. Representative, highly cited projects:
#* Bongard & Lipson (2007): automated reverse-engineering of nonlinear dynamical systems (genetic programming → symbolic equations). They show that low-dimensional laws can be discovered from trajectories.<ref>{{cite web|title=Meclab|url=https://meclab.w3.uvm.edu/papers/2007_PNAS_Bongard.pdf|publisher=meclab.w3.uvm.edu|access-date=2025-12-10}}</ref>
#* Schmidt & Lipson (Science, 2009): "Distilling free-form natural laws from experimental data" (symbolic regression with a parsimony/accuracy tradeoff; famously rediscovered the Newtonian pendulum law).<ref>{{cite web|title=Science|url=https://www.science.org/doi/abs/10.1126/science.1165893|publisher=science.org|access-date=2025-12-10}}</ref>
#* Brunton, Proctor & Kutz (SINDy, 2016): sparse identification of governing equations, using sparsity (an Occam-like prior) to pick a few terms from a large candidate library. Widely used and scalable; see the sketch after this list.<ref>{{cite web|title=PNAS|url=https://www.pnas.org/doi/10.1073/pnas.1517384113|publisher=pnas.org|access-date=2025-12-10}}</ref>
#* Rudy et al. (PDE-FIND, 2017): extends sparse discovery to partial differential equations (spatial systems).<ref>{{cite web|title=Science|url=https://www.science.org/doi/10.1126/sciadv.1602614|publisher=science.org|access-date=2025-12-10}}</ref>
#* Udrescu & Tegmark (AI-Feynman, 2019/2020): physics-inspired symbolic regression that rediscovers many textbook laws by combining neural nets with simplification heuristics (a very high success rate on the Feynman-equation benchmarks).<ref>{{cite web|title=arXiv|url=https://arxiv.org/abs/1905.11481|publisher=arxiv.org|access-date=2025-12-10}}</ref>
# Many follow-ups and hybrids: sparse regression plus physics priors, physics-informed neural nets, deep symbolic regression, PySINDy / DeepMoD, and packages that automate the pipeline from raw measurements to compact models (literature surveys document this growth).<ref>{{cite web|title=PMC|url=https://pmc.ncbi.nlm.nih.gov/articles/PMC7861345/|publisher=pmc.ncbi.nlm.nih.gov|access-date=2025-12-10}}</ref>
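To make the sparsity bias concrete, here is a minimal sketch of the core SINDy step, sequential thresholded least squares, in the spirit of Brunton, Proctor & Kutz. This is a toy NumPy reimplementation, not the PySINDy API; the example system, the candidate library, and all parameter values are illustrative assumptions.

<syntaxhighlight lang="python">
# Minimal SINDy-style sketch in plain NumPy (not the PySINDy API).
# Assumptions for illustration: a toy 2-D linear system, forward-Euler
# integration, derivatives via np.gradient, a hand-picked polynomial
# candidate library, and an arbitrary threshold of 0.05.
import numpy as np

def stlsq(theta, dxdt, threshold=0.05, n_iter=10):
    """Sequential thresholded least squares (the Occam-like step in SINDy):
    fit all candidate terms, zero out small coefficients, refit survivors."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):
            keep = ~small[:, k]
            if keep.any():  # refit only the terms that survived thresholding
                xi[keep, k] = np.linalg.lstsq(theta[:, keep], dxdt[:, k],
                                              rcond=None)[0]
    return xi

# Toy data: dx/dt = -0.1*x + 2*y, dy/dt = -2*x - 0.1*y (a damped rotation).
t = np.linspace(0.0, 10.0, 2000)
dt = t[1] - t[0]
X = np.empty((t.size, 2))
X[0] = [2.0, 0.0]
for i in range(1, t.size):  # forward-Euler integration of the true system
    x, y = X[i - 1]
    X[i] = X[i - 1] + dt * np.array([-0.1 * x + 2.0 * y, -2.0 * x - 0.1 * y])

dXdt = np.gradient(X, dt, axis=0)  # numerical time derivatives

# Candidate library of terms: the hypothesis space the sparsity prior prunes.
x, y = X[:, 0], X[:, 1]
theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

xi = stlsq(theta, dXdt)
print(np.round(xi, 3))  # expect nonzero entries only in the x and y rows,
                        # near [-0.1, -2.0] and [2.0, -0.1] respectively
</syntaxhighlight>

The hard threshold is the whole point: it is a crude but effective stand-in for a simplicity prior, discarding candidate terms whose fitted coefficients are too small to matter.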
Do these solve the philosophical underdetermination problem? No, not completely.

* Practically: these methods make underdetermination tractable. They often single out compact, predictive models from noisy data and scale far better than a human exhaustively searching hypothesis space (Schmidt & Lipson, SINDy, and AI-Feynman are good examples).<ref>{{cite web|title=Science|url=https://www.science.org/doi/abs/10.1126/science.1165893|publisher=science.org|access-date=2025-12-10}}</ref>
* Philosophically: underdetermination persists, because any algorithm needs representational choices (the language of functions, basis terms, noise model, numerical tolerances, search primitives, priors). Those choices reintroduce "human" bias or background assumptions. Formal Occam priors (Solomonoff/MDL) prove that attractive objects exist in theory, but they are uncomputable (Solomonoff) or sensitive in practice to the chosen universal machine, code, or modelling language. So underdetermination has not been eliminated; it has been moved into design choices. For the general philosophical discussion, see treatments of underdetermination and critiques of algorithmic/Bayesian simplicity.<ref>{{cite web|title=Ray Solomonoff|url=https://raysolomonoff.com/publications/1964pt1.pdf|publisher=raysolomonoff.com|access-date=2025-12-10}}</ref>

'''Useful short takeaway'''
* Who's tried it: Bongard & Lipson; Schmidt & Lipson; Brunton, Proctor & Kutz (SINDy); Rudy et al. (PDE-FIND); Udrescu & Tegmark (AI-Feynman); plus the theoretical lineage (Solomonoff, Hutter, Chaitin, Rissanen/MDL, Grünwald).<ref>{{cite web|title=Ray Solomonoff|url=https://raysolomonoff.com/publications/1964pt1.pdf|publisher=raysolomonoff.com|access-date=2025-12-10}}</ref>
* Status: strong practical progress (equation discovery works on many real systems), but no definitive philosophical resolution of underdetermination. The methods reduce the space of plausible rivals by preferring simplicity, but they do not make that preference theoretically neutral or eliminate all rival explanations.

If you want, I can:
* give a short annotated bibliography (2–3 lines each) of the most relevant papers above, or
* compare how each method formalizes Occam's razor (e.g., symbolic-regression parsimony penalties vs. MDL vs. Solomonoff priors) with pros and cons; the block below previews the three objects that comparison would center on.

Which would you prefer?
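As a preview of that comparison, the three simplicity criteria can be stated compactly. These are standard, textbook-level forms; the exact penalties and code-length conventions vary across the papers above.

* Symbolic-regression parsimony: choose <math>\hat f = \arg\min_f \big[\, \mathrm{err}(f, D) + \lambda \, c(f) \,\big]</math>, where <math>c(f)</math> counts the terms or nodes in the expression and <math>\lambda</math> sets the accuracy/simplicity tradeoff.
* MDL (Rissanen/Grünwald): choose <math>\hat H = \arg\min_H \big[\, L(H) + L(D \mid H) \,\big]</math>, with both code lengths measured in bits.
* Solomonoff prior: weight hypotheses by <math>M(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}</math>, summing over programs <math>p</math> of length <math>\ell(p)</math> that output <math>x</math> on a universal machine <math>U</math>; elegant but uncomputable, and dependent on the choice of <math>U</math>.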