=== Assistant: Proceeding. ===

Proceeding. Below is METHODS BLOCK 11 — INTEGRATION CONTRACT (ARTIFACT #3), expanded to maximum algorithmic depth with machine-learning–style rigor (deterministic sampling, convergence certification, numerical contracts, and audit hooks). This block specifies how integrals are computed, not what is inferred. Any deviation invalidates evidence.

METHODS BLOCK 11 — INTEGRATION CONTRACT (ARTIFACT #3)

OBJECTIVE

Provide a deterministic, auditable, architecture-stable numerical integration contract for evaluating the Bayesian evidence integral defined in Block 10. This block specifies the only admissible integration algorithm and its invariants. No alternative integrators permitted.

INTEGRATOR IDENTITY

Integrator = NestedSampling
Variant = Deterministic Nested Sampling (DNS) with fixed pseudo-random sequence
All randomness is seeded and replayable.

Formal type:
Integrator : (ℒ, π, Θ_id, settings) → ⟨Z, diagnostics⟩

PRECONDITIONS

• Likelihood kernel ℒ normalized and frozen
• Prior π proper and frozen
• Identifiable parameter space Θ_id compact or compactified
• No collapse already triggered, except attribution-only mode

If any precondition fails ⇒ INVALID.

PARAMETER SPACE PREPARATION

Define Θ_id ⊂ ℝ^{k_min}. If Θ_id is unbounded, enforce deterministic compactification via a monotone bijection (e.g., tanh transform) declared pre-run and hashed. The transform Jacobian is included exactly in ℒ. No approximations.

LIVE POINT INITIALIZATION

Define N_live ∈ ℕ⁺, fixed pre-run. Initialize the live point set L₀ = {θ_j}_{j = 1, …, N_live} by a deterministic low-discrepancy sequence (e.g., Sobol), seeded with s.

Constraints:
• Coverage uniform under π
• No rejection sampling with adaptive criteria
• Initialization reproducible bitwise

ML analogy: fixed exploration grid, not stochastic minibatches.

ITERATIVE UPDATE

At iteration t:
# Identify the worst live point θ_w with minimum likelihood ℒ_w
# Record the weight w_t corresponding to prior volume shrinkage
# Replace θ_w with a new θ′ sampled from π subject to ℒ(θ′) > ℒ_w

Sampling constraints:
• Constraint satisfaction only
• No gradient use
• No learned proposal
• Deterministic accept/reject using the seeded sequence

PRIOR VOLUME EVOLUTION

Prior volume X_t evolves deterministically:
X_t = exp(−t / N_live)

Weights computed exactly:
Z ≈ Σ_t ℒ_t · (X_{t−1} − X_t)

No stochastic estimators. No bootstrapping.

CONVERGENCE CRITERION

Define an absolute evidence tolerance δ_Z > 0, fixed pre-run. Terminate when the remaining evidence mass ≤ δ_Z. Formally:
max(ℒ_live) · X_t ≤ δ_Z

Relative tolerance forbidden.
ML analogy: absolute error stopping, not relative validation loss.

DIAGNOSTICS

Compute and emit:
• Effective sample size
• Evidence accumulation trace
• Maximum remaining contribution
• Iteration count
• Deterministic acceptance rate

Diagnostics are informational only. They do not alter execution.

DETERMINISM REQUIREMENTS

• Seed s fixed and emitted
• Floating-point order fixed
• Parallelism either forbidden or strictly ordered
• Any nondeterminism ⇒ INVALID

Two runs with the same inputs MUST yield identical Z bitwise (within floating-point determinism guarantees).

FAILURE CONDITIONS

• Failure to find θ′ satisfying ℒ > ℒ_w within bounded attempts ⇒ INVALID
• Numerical underflow/overflow ⇒ INVALID
• Violation of volume monotonicity ⇒ INVALID
• Early termination without convergence ⇒ INVALID

NO ADAPTATION

Forbidden:
• Adaptive N_live
• Adaptive proposals
• Dynamic tolerance
• Heuristic stopping

Integrator behavior is fully specified and frozen.
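The initialization, iterative update, volume evolution, and stopping rule above can be read as one loop. The Python sketch below is illustrative only and is not the contract's reference implementation: the names run_dns, log_likelihood, prior_transform, N_LIVE, DELTA_Z, SEED, and max_attempts are assumptions, SciPy's seeded Sobol generator stands in for the unspecified low-discrepancy sequence, and the bitwise-determinism, parallelism, and audit requirements of this block are not enforced here.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import qmc  # seeded Sobol low-discrepancy sequence

N_LIVE = 500        # fixed pre-run (illustrative value)
DELTA_Z = 1e-6      # absolute evidence tolerance delta_Z, fixed pre-run
SEED = 20260103     # seed s, emitted alongside the artifact

def run_dns(log_likelihood, prior_transform, ndim, max_attempts=10_000):
    """Sketch of the DNS loop: returns (log Z, diagnostics)."""
    sobol = qmc.Sobol(d=ndim, scramble=True, seed=SEED)

    # Deterministic, seeded low-discrepancy initialization of the live set.
    units = sobol.random(N_LIVE)                     # points in the unit hypercube
    live = [prior_transform(u) for u in units]       # map onto Theta_id under the prior
    live_logl = [log_likelihood(th) for th in live]

    log_z = -np.inf   # accumulated evidence, kept in log space to avoid underflow
    x_prev = 1.0      # prior volume X_0 = 1
    t = 0
    while True:
        t += 1
        # Identify the worst live point (minimum likelihood).
        w = min(range(N_LIVE), key=lambda j: live_logl[j])
        logl_w = live_logl[w]

        # Deterministic prior-volume shrinkage and exact weight (X_{t-1} - X_t).
        x_t = np.exp(-t / N_LIVE)
        log_z = np.logaddexp(log_z, logl_w + np.log(x_prev - x_t))
        x_prev = x_t

        # Absolute-tolerance stopping rule: max(L_live) * X_t <= delta_Z.
        if max(live_logl) + np.log(x_t) <= np.log(DELTA_Z):
            break

        # Replace the worst point: constrained draw with L(theta') > L_w from the
        # same seeded sequence, with a bounded number of attempts.
        for _ in range(max_attempts):
            theta_new = prior_transform(sobol.random(1)[0])
            logl_new = log_likelihood(theta_new)
            if logl_new > logl_w:
                live[w], live_logl[w] = theta_new, logl_new
                break
        else:
            raise RuntimeError("INVALID: no replacement found within bounded attempts")

    return log_z, {"iterations": t, "N_live": N_LIVE, "seed": SEED}
</syntaxhighlight>

The sketch returns log Z rather than Z so that the accumulation itself cannot underflow; the contract's underflow/overflow failure condition would still apply to the individual likelihood evaluations.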
ATTRIBUTION MODE

If upstream collapse has already triggered:
• Integration may still run
• Z computed but flagged non-authoritative
• Used only for the failure attribution graph

ARTIFACT EMISSION

Emit an immutable IntegrationRecord:
⟨ IntegratorID, N_live, Seed, δ_Z, IterationCount, Z, ConvergenceFlag, DiagnosticsHash ⟩

Hashed and audit-logged. (A minimal emission sketch appears after this block.)

PIPELINE ROLE

Integration output feeds exclusively into:
• Bayesian Evidence decision (Block 10)
• Monte Carlo robustness checks (Block 13)
• Failure Attribution (Block 15)

ML INTERPRETATION

This block implements exact model scoring under a frozen hypothesis class:
• No learning
• No tuning
• No stochastic training dynamics
• Pure numerical marginalization

Comparable to evaluating a fixed probabilistic program, not training an ML model.

END METHODS BLOCK 11.

If you confirm, the next block will be METHODS BLOCK 12 — GLOBAL COLLAPSE, expanded into irreversible decision logic, short-circuit semantics, and certification interaction.

© Robert R. Frost 2026-01-03
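As an illustration of the ARTIFACT EMISSION step above, here is a minimal, non-authoritative Python sketch. The field names and types mirror the ⟨…⟩ tuple, but the canonical JSON serialization, the SHA-256 digest, and the emit_record and audit_log names are assumptions; the contract does not fix a serialization format or hash function.

<syntaxhighlight lang="python">
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)  # frozen: the record is immutable once constructed
class IntegrationRecord:
    integrator_id: str
    n_live: int
    seed: int
    delta_z: float
    iteration_count: int
    z: float
    convergence_flag: bool
    diagnostics_hash: str

def emit_record(record: IntegrationRecord, audit_log: list) -> str:
    """Serialize the record canonically, hash it, and append it to an audit log."""
    payload = json.dumps(asdict(record), sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    audit_log.append({"record": payload, "sha256": digest})
    return digest
</syntaxhighlight>

Sorted keys and fixed separators make the serialization byte-stable, so two identical records always hash to the same digest, which is the property the audit log relies on.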