* If you want a worst-case guarantee over all graphs/instances of the kind stated in your open problem, i.e. a universal <math>\delta</math> such that plain FISTA/linear-coupling updates at most <math>\delta/\rho</math> coordinates per iteration, then I don't see a path via standard acceleration analysis alone, because the whole difficulty is controlling the active set of the extrapolated iterates, and generic proximal identification theory says you cannot control that uniformly without a nondegeneracy margin.
* The only provable acceleration-with-sparsity result I'm aware of in this line is essentially active-set acceleration (Martínez-Rubio et al.), which introduces exactly the kind of subproblem / restricted-space solves you want to avoid.<ref>{{cite web|title=Proceedings of Machine Learning Research|url=https://proceedings.mlr.press/v195/martinez-rubio23b/martinez-rubio23b.pdf|publisher=Proceedings of Machine Learning Research|access-date=2025-12-19}}</ref>

===== The most plausible compromise (theorem-wise) =====
# Prove a finite identification result for your PageRank instance class (this needs a margin condition or a probabilistic model), then
# run a single accelerated method on the identified support (see the sketch below).

This avoids repeated expensive restricted solves, but it won't give a clean worst-case <math>\tilde O\big((\sqrt{\alpha}\,\rho)^{-1}\big)</math> unless you can lower-bound the margin in terms of <math>\rho</math> for all graphs, which is exactly the kind of statement that seems false in the worst case (symmetries/degeneracy).

If you tell me which "standard accelerated algorithm" you have in mind (plain FISTA, strongly convex APG with fixed <math>\beta</math>, a linear-coupling variant, etc.), I can be more concrete about which invariants one would need to establish (e.g., order bounds <math>0 \le y_k \le c\,x^*</math>, or a per-iteration certificate that prevents activation outside a small frontier) and where the known theory breaks.
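The following is a minimal sketch of the "identify, then accelerate" compromise above, assuming a generic strongly convex quadratic-plus-ℓ1 objective as a stand-in for the ℓ1-regularized PageRank problem. The matrix <code>Q</code>, the seed vector <code>c</code>, the parameters <code>rho</code> and <code>step</code>, the stopping heuristic, and the synthetic instance are all illustrative assumptions, not part of the original discussion or of any published guarantee. Phase 1 runs plain (non-accelerated) proximal gradient until the support stops changing; phase 2 runs a single FISTA pass restricted to the identified coordinates, so extrapolation cannot activate anything outside that support.

<syntaxhighlight lang="python">
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: coordinate-wise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_until_identified(Q, c, rho, step, x0, patience=25, max_iter=2000):
    """Phase 1: plain proximal gradient (no extrapolation). Stop once the
    support has stayed fixed for `patience` iterations; this is a heuristic
    stand-in for a finite-identification theorem, which would need a
    nondegeneracy margin."""
    x, supp, stable = x0.copy(), np.flatnonzero(x0), 0
    for _ in range(max_iter):
        x = soft_threshold(x - step * (Q @ x - c), step * rho)
        new_supp = np.flatnonzero(x)
        stable = stable + 1 if np.array_equal(new_supp, supp) else 0
        supp = new_supp
        if stable >= patience:
            break
    return x, supp

def fista_on_support(Q, c, rho, step, x, supp, max_iter=500, tol=1e-10):
    """Phase 2: one FISTA run restricted to the identified support. The
    extrapolated point y only touches coordinates in `supp`, so no
    coordinate outside the support is ever activated and the per-iteration
    cost depends on |supp|, not on n."""
    Qs, cs = Q[np.ix_(supp, supp)], c[supp]
    z = x[supp].copy()
    y, t = z.copy(), 1.0
    for _ in range(max_iter):
        z_new = soft_threshold(y - step * (Qs @ y - cs), step * rho)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = z_new + ((t - 1.0) / t_new) * (z_new - z)
        done = np.linalg.norm(z_new - z) <= tol
        z, t = z_new, t_new
        if done:
            break
    out = np.zeros_like(x)
    out[supp] = z
    return out

# Tiny synthetic instance (placeholder data, not a real graph).
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) / np.sqrt(n)
Q = A.T @ A + 0.1 * np.eye(n)        # strongly convex quadratic part
c = np.zeros(n)
c[:5] = 1.0                          # "seed mass" on a few coordinates
rho = 0.05
step = 1.0 / np.linalg.norm(Q, 2)    # 1/L for the quadratic part
x1, supp = ista_until_identified(Q, c, rho, step, np.zeros(n))
x2 = fista_on_support(Q, c, rho, step, x1, supp)
print("identified support:", supp.size, "coords; final nnz:", np.count_nonzero(x2))
</syntaxhighlight>

Whether phase 1 actually identifies the optimal support after finitely many steps, and whether the "support unchanged for a while" stopping rule is safe, is exactly what the margin condition in step 1 above would have to certify; the sketch only illustrates the structure of the two-phase scheme.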