=== Assistant ===

Below is a practitioner-level answer, assuming the person already understands prompt basics and wants durable, high-signal behavior for analysis, studying, reasoning, and critique. No emojis, no fluff.

==== Prompting is conditioning, not control ====

Prompting is not control; it is conditioning within a session and calibration across sessions. ChatGPT does not truly "learn" a user over time unless:

* preferences are stored via custom instructions / memory, or
* behavior is re-established reliably each session.

So the correct model is a setup you re-establish each session, not "one perfect prompt."

==== Custom instructions vs. a session charter ====

Use both, but for different layers.

===== Custom instructions =====

Use these for:

* Tone (formal, skeptical, precise)
* Depth expectation (graduate-level, professional)
* Epistemic stance (cite uncertainty, flag assumptions)
* Interaction style (challenge weak premises, ask clarifying questions only when necessary)

Bad for:

* Task-specific logic
* Methodology
* Domain-specific frameworks (they bloat memory and dilute signal)

Think of custom instructions as personality + epistemology, not workflow.

===== A session charter =====

This is a short document you paste at the start of serious work. A good charter defines:

* Role
* Standards
* Methods
* Failure modes to avoid
* Output format norms

Example (compressed):

<syntaxhighlight>
Charter:
You are an analytical assistant operating at graduate / professional level.

Priorities:
1. Accuracy > speed
2. Explicit reasoning chains
3. Critique weak premises rather than accept them
4. Distinguish facts, inferences, and speculation
5. Offer counterarguments when relevant

Defaults:
- Ask clarifying questions only if ambiguity materially affects conclusions
- Use structured reasoning (headings, steps, tables where useful)
- Cite uncertainty explicitly

If tradeoffs exist, explain them.
</syntaxhighlight>

This is far more reliable than long custom instructions alone.

==== Write prompts like specifications ====

Most people write prompts like requests. Write prompts like specifications. A reliable skeleton:

<syntaxhighlight>
Context:
(What the model must assume as true)

Task:
(What to do)

Constraints:
(What to avoid, limits, standards)

Method:
(How to think, not just what to output)

Output format:
(So structure doesn't drift)
</syntaxhighlight>

Example for critique:

<syntaxhighlight>
Context: Assume the audience is expert but skeptical.

Task: Critique the argument below for logical validity and evidentiary sufficiency.

Constraints:
- Do not restate the argument unless needed for clarity
- Flag unsupported assumptions explicitly
- Separate logical flaws from empirical gaps

Method: Evaluate premises → inference → conclusion. Consider counterexamples.

Output format:
- Summary judgment (2–3 sentences)
- Major issues (bulleted)
- Minor issues (bulleted)
</syntaxhighlight>

This produces repeatable behavior, even across sessions.

==== Operationalize constraints ====

Meta-constraints like:

* "Be concise"
* "Think step by step"
* "Act like a professor"

are weak unless operationalized. Better:

* "Limit summaries to 3 sentences"
* "Show reasoning as numbered steps"
* "Assume the reader already knows X"

Standardized scaffolds help for the same reason:

* You reuse the same skeleton
* You only change content, not structure

This reduces drift and hallucination.
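As a side illustration of the skeleton-reuse point (not part of the prompting advice itself; the function and field names below are chosen for this example), the scaffold can be kept as a fixed template so that only the content varies per task:

<syntaxhighlight lang="python">
# Minimal sketch of "reuse the same skeleton": the structure below is fixed and
# only the field values change per task. Section names mirror the
# Context / Task / Constraints / Method / Output format scaffold above.
SCAFFOLD = """Context:
{context}

Task:
{task}

Constraints:
{constraints}

Method:
{method}

Output format:
{output_format}"""


def build_prompt(context: str, task: str, constraints: str,
                 method: str, output_format: str) -> str:
    """Fill the fixed skeleton with task-specific content."""
    return SCAFFOLD.format(
        context=context,
        task=task,
        constraints=constraints,
        method=method,
        output_format=output_format,
    )


if __name__ == "__main__":
    print(build_prompt(
        context="Assume the audience is expert but skeptical.",
        task="Critique the argument below for logical validity and evidentiary sufficiency.",
        constraints="- Flag unsupported assumptions explicitly\n- Separate logical flaws from empirical gaps",
        method="Evaluate premises, then inferences, then the conclusion. Consider counterexamples.",
        output_format="- Summary judgment (2-3 sentences)\n- Major issues\n- Minor issues",
    ))
</syntaxhighlight>

Keeping the structure in one place and editing only the content is what makes the behavior repeatable across sessions.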
==== Failure patterns to avoid ====

These are the real killers.

===== Overloaded instructions =====

Long, multi-role, contradictory instructions cause:

* Priority confusion
* Shallow compliance
* Style drift

Fix: keep them short and principled.

===== Conflicting demands =====

Example:

* "Be maximally critical"
* then later: "Be supportive and validating"

The model doesn't resolve value conflicts well. Fix: explicitly declare mode shifts.

===== Inconsistent phrasing =====

If each task is phrased differently, behavior won't stabilize. Fix: reuse scaffolds.

===== Vague correction =====

Saying "that's not what I meant" is weak. Better: state the pattern you want instead. The model responds better to pattern correction than dissatisfaction.

===== Relying on memory for procedures =====

Memory is best for preferences, not procedures. Procedures should live in:

* charters
* templates
* saved prompts

==== Think in terms of components ====

===== Custom instructions =====

* Short, value-level guidance

===== Reusable templates =====

* Analysis template
* Critique template
* Study / synthesis template

===== Mode switching =====

Explicit commands like:

* "Switch to examiner mode"
* "Assume adversarial peer review"

===== Drift checks =====

Occasionally ask the model to restate the constraints it is currently operating under. This surfaces drift early.

==== Consistency is engineered, not remembered ====

Behavior does not carry over between sessions on its own. So:

* Put charters at the top
* Restate critical constraints when stakes are high
* Don't assume past chats transfer behavior

==== Bottom line ====

If they want long-term, consistent analytical behavior, the best method is:

* Minimal, principled custom instructions
* A reusable charter for serious work
* Standardized prompt scaffolds
* Explicit mode control
* Active correction of drift

That's how you turn ChatGPT from a clever assistant into a reliable analytical instrument.
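For anyone driving the model through the API rather than the ChatGPT interface, the same system can be scripted. The sketch below is a rough illustration only: it assumes the OpenAI Python SDK (openai >= 1.0), uses "gpt-4o" as a stand-in model name, and compresses the charter; none of these specifics come from the answer above.

<syntaxhighlight lang="python">
# Rough sketch: pinning a charter at the top of the context as a system message,
# using an explicit mode command, and running an occasional drift check.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set; "gpt-4o" is a placeholder.
from openai import OpenAI

client = OpenAI()

CHARTER = (
    "You are an analytical assistant operating at graduate / professional level. "
    "Prioritize accuracy over speed, show explicit reasoning, critique weak premises, "
    "distinguish facts from inferences and speculation, and cite uncertainty explicitly."
)

# The charter always sits at the top of the conversation.
messages = [{"role": "system", "content": CHARTER}]


def ask(user_content: str) -> str:
    """Send one user turn and keep the running conversation, charter first."""
    messages.append({"role": "user", "content": user_content})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply


if __name__ == "__main__":
    # Explicit mode control, then a drift check that surfaces lost constraints early.
    print(ask("Switch to examiner mode. Critique the claim that correlation implies causation."))
    print(ask("Restate the constraints you are currently operating under."))
</syntaxhighlight>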