==== This is where it becomes lethal for “LLM + old database” pain. ====

===== Most attempts look like this: =====

# User: “Show me all customers who churned after a failed payment and then came back within 90 days.”
# System: Prompt an LLM with the DB schema.
# LLM: Generates SQL.
# DB: Runs it. Sometimes it is correct; sometimes it is totally wrong.
# Next day, with slightly different wording, the LLM generates different SQL with different bugs.

Problems:
* No persistent memory of the schema beyond each prompt.
* No guarantees of correctness or safety.
* No shared semantics across multiple DBs, services, or logs.
* If you change the schema, your prompt recipes break.

===== SAIQL is designed to be both: =====

# The “brain” language – what you (or an AI) think in about your data.
# The compiler – it translates SAIQL → SQL / NoSQL / vector / filesystem ops.

Workflow:
# You define your semantic mapping once:
#* “This customers table’s closed_at means the churn date.”
#* “This payments table’s status = 'failed' is a failed-payment event.”
#* “This log stream and these docs all relate to incident reports.”
# SAIQL stores that mapping in LoreTokens (your semantic schema).
# When you ask a concept-level question (like the churn question above), SAIQL:
#* Knows which tables and fields correspond to those concepts.
#* Compiles a correct, tested SQL query (or multiple queries across systems).
#* Executes them deterministically.
#* Returns results in a semantic structure (LoreTokens), which an AI can then summarize, visualize, or act on.
# If your schema changes later (“we renamed closed_at to churned_at”):
#* You update the semantic mapping once.
#* All SAIQL queries that use the “customer churn” concept still work.

So SAIQL acts as a stable semantic API over unstable physical schemas.

===== Old way: =====
* Every team builds ad-hoc SQL, Python, and brittle prompt chains to interrogate EHR data or trade histories.
* Every time a column name or table structure changes, half your analytics break.
===== SAIQL way: =====
* “Encounter,” “diagnosis,” “medication order,” “trade,” “limit order,” “emergency stop,” etc. become semantic entities encoded as LoreTokens.
* SAIQL knows:
** Which tables/fields implement those entities in each system.
** How to convert from one system’s schema to another’s.

Then:
* An AI agent can ask SAIQL a concept-level question, and SAIQL compiles it into:
** A set of SQL queries across multiple EHR tables
** Possibly a vector search over clinical notes
** Filters on lab results

But the AI never has to learn the raw schema. It just speaks in SAIQL-level concepts.
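The "define the mapping once, survive a rename" workflow above can be sketched in a few lines of Python. This is a minimal illustration of the idea only: the mapping keys, compile_churn_query, and the reactivated_at/created_at columns are hypothetical stand-ins, not SAIQL's actual API.

```python
# Hypothetical sketch: concepts map to physical tables/columns once, and
# queries are compiled through the mapping rather than hard-coding names.

semantic_map = {
    "churn_date":     ("customers", "closed_at"),
    "failed_payment": ("payments", "status = 'failed'"),
}

def compile_churn_query(mapping):
    """Compile 'customers who churned after a failed payment and came
    back within 90 days' into SQL, resolving concepts via the mapping."""
    churn_tbl, churn_col = mapping["churn_date"]
    pay_tbl, pay_filter = mapping["failed_payment"]
    return (
        f"SELECT c.id FROM {churn_tbl} c "
        f"JOIN {pay_tbl} p ON p.customer_id = c.id "
        f"WHERE p.{pay_filter} "
        f"AND c.{churn_col} > p.created_at "
        f"AND c.reactivated_at <= c.{churn_col} + INTERVAL '90 days'"
    )

sql_v1 = compile_churn_query(semantic_map)  # resolves to closed_at

# Schema change: closed_at renamed to churned_at. Update the mapping
# once; the concept-level query itself is untouched.
semantic_map["churn_date"] = ("customers", "churned_at")
sql_v2 = compile_churn_query(semantic_map)  # now resolves to churned_at

assert "closed_at" in sql_v1 and "closed_at" not in sql_v2
assert "churned_at" in sql_v2
```

The point of the sketch is the last four lines: the physical rename touches one mapping entry, while every query written against the "churn date" concept keeps compiling.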
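Likewise, the fan-out from one concept-level request to several backends can be sketched as a planning step. Again everything here is an illustrative assumption, not SAIQL's real interface: plan_semantic_query, the *_events tables, and the nearest(...) vector call are invented for the example.

```python
# Hypothetical sketch: one concept-level request becomes a plan spanning
# several backends; the caller never sees raw table names.

def plan_semantic_query(concept_filters):
    """Map each concept filter onto the backend that implements it."""
    plan = []
    for concept, value in concept_filters.items():
        if concept in ("diagnosis", "medication_order"):
            # Structured concepts resolve to SQL over EHR tables.
            plan.append(("sql",
                         f"SELECT * FROM {concept}_events WHERE code = '{value}'"))
        elif concept == "clinical_note":
            # Free-text concepts resolve to a vector search over notes.
            plan.append(("vector",
                         f"nearest(note_embeddings, embed('{value}'), k=20)"))
        else:
            raise ValueError(f"unmapped concept: {concept}")
    return plan

# One request, two backends: a structured diagnosis filter plus a
# semantic search over clinical notes.
plan = plan_semantic_query({"diagnosis": "E11.9",
                            "clinical_note": "insulin adjustment"})
assert [kind for kind, _ in plan] == ["sql", "vector"]
```

The agent speaks only in concepts ("diagnosis", "clinical_note"); which physical systems answer, and how, is decided entirely inside the planner.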