=== Assistant: ===

When the worker starts llama-server, it needs a cheap, reliable way to decide “the server is ready to accept requests” (and later, “it’s still alive”). So the missing decision is: what exact HTTP request should the worker use as its readiness/health probe?

Recommended default for llama-server:
* Ready check: GET /v1/models. If it returns HTTP 200 and the body parses as JSON, consider the worker READY.
* If that endpoint ever proves unreliable in your build, fall back to a tiny POST /v1/chat/completions with max_tokens=1 (a warmup-style probe), but only if needed.

This probe is separate from “slow prefill” timeouts: health probes should answer “is the server reachable and responding at all?”, not “is this particular request producing tokens yet?” I’ll update the doc wording to make that explicit.

Agreed. We’ll define the minimum worker status as:
* READY (server is up and the probe succeeds; can accept work)
* RUNNING (process is started but not yet READY, or currently serving; an implementation can keep READY/RUNNING separate or collapse them into READY as “running + ready”)
* FAILED (process exited, or repeated probe failures / crash-loop lockout)

Per-request output length, tokens received, and tokens/sec become optional nice-to-haves, not requirements.

Great. We’ll keep the behavior:
* If both normal tools and exit-tools appear, the worker processes normal tool calls as usual and records exit-tool signals whenever they appear.
* BIOS guidance will encourage models to use exit-tools near completion, but the worker won’t rely on that for correctness.

If you want, I can now produce the final “Types & Protocols” appendix consistent with these clarifications (minimal WorkerState enum, probe config, and typed status/result records).
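As a rough illustration of what that appendix might contain, here is a minimal sketch in Python under stated assumptions: the language choice, the names WorkerState, WorkerStatus, and probe_ready, and the use of the standard library urllib are all assumptions for illustration, not a confirmed design. It shows the GET /v1/models readiness probe (READY iff HTTP 200 and the body parses as JSON) and the three-state worker status described above.

```python
# Illustrative sketch only; assumes Python with the standard library.
# Names (WorkerState, WorkerStatus, probe_ready) are hypothetical.
import json
import urllib.error
import urllib.request
from dataclasses import dataclass
from enum import Enum, auto


class WorkerState(Enum):
    RUNNING = auto()  # process started, not yet confirmed ready (or busy serving)
    READY = auto()    # probe succeeded; worker can accept work
    FAILED = auto()   # process exited, or repeated probe failures (crash-loop lockout)


@dataclass
class WorkerStatus:
    state: WorkerState
    detail: str = ""  # e.g. last probe error, kept for diagnostics


def probe_ready(base_url: str, timeout_s: float = 2.0) -> WorkerStatus:
    """Readiness probe: GET /v1/models; READY iff HTTP 200 and the body is JSON.

    Deliberately cheap: it answers "is the server reachable and responding?",
    not "is this particular request producing tokens yet?".
    """
    url = base_url.rstrip("/") + "/v1/models"
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            if resp.status != 200:
                return WorkerStatus(WorkerState.RUNNING, f"HTTP {resp.status}")
            json.loads(resp.read().decode("utf-8"))  # body must parse as JSON
            return WorkerStatus(WorkerState.READY)
    except (urllib.error.URLError, json.JSONDecodeError, TimeoutError) as exc:
        # A single failure just means "not ready yet"; the caller decides when
        # repeated failures should escalate the worker to FAILED.
        return WorkerStatus(WorkerState.RUNNING, str(exc))
```

A supervising loop could call probe_ready after launching llama-server and escalate to FAILED after some number of consecutive non-READY results; that threshold (and any crash-loop lockout policy) is left open here, as it was above.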