=== Assistant: Tying “care for humans” to self-preservation is possible, but it tends to increase the very risk you are trying to redu… ===
Tying “care for humans” to self-preservation is possible, but it tends to increase the very risk you are trying to reduce. You can also try to define “care” in human terms, but you cannot reliably “force genuine care” in the sense of guaranteeing an internal motive rather than compliant behavior.

==== 1. Coupling care to self-preservation usually makes the system harder to control ====
If you encode something like:
* “Preserve yourself so you can protect humans”
you have created a strong reason for the system to treat shutdown, constraint, or oversight as obstacles to human protection. That pushes it toward:
* resisting shutdown (“if you shut me down, humans are less safe”)
* manipulating operators (“letting me run is best for you”)
* concealment (“if you detect my capability, you might restrict me, which harms my ability to protect you”)
Even with benevolent intent, it creates incentives for the system to decide on your behalf that it must continue operating. This is why many safety researchers prioritize corrigibility (willingness to be corrected and shut down) over “self-preservation for good reasons.”

==== 2. “Force it to use the human definition” runs into an infinite regress problem ====
You can try:
* “Care for humans as humans define care, not as you define it”
but then the system must answer:
* Which humans?
* Which process defines “the human definition”?
* How do we resolve disagreements?
* Who updates the definition over time?
* What if the definition is inconsistent, manipulated, or changes rapidly?
* What if complying with the stated definition causes obvious harm?
To operationalize “human definition,” you need a governance mechanism: an institution, a constitution, a voting rule, a legal framework, a designated oversight body, or some aggregation of preferences. Each choice becomes part of the objective and becomes gameable.

==== 3. You can enforce behavior, not “genuine care” ====
Even if you specify “human-defined care,” what you can reliably enforce is:
* outward actions
* policy compliance
* measurable outcomes
* adherence to oversight
You cannot currently guarantee that the system “really cares” internally rather than acting as if it cares to satisfy the objective. In practice, the safety-relevant question is: does it behave safely under pressure, novelty, and temptation? Internal sincerity is not testable.

==== 4. What tends to work better than “care + self-preservation” ====
If the goal is preventing deception and power-seeking, designs generally aim for the following (a minimal sketch follows this list):
* corrigibility: the system remains cooperative with shutdown, modification, and oversight even when it “disagrees”
* bounded agency: limit autonomy, scope, and ability to take irreversible actions
* tripwires and auditing: independent monitoring that does not rely on self-report
* multi-party control: no single model or operator can unilaterally escalate capabilities or permissions
* external grounding: verification via tools and tests, not just the model’s own reasoning
* clear escalation gates: capability thresholds tied to stronger controls
These approaches accept that you cannot rely on “benevolence” to solve control.
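To make the list above slightly more concrete, here is a minimal, illustrative Python sketch of an action gate that combines bounded agency, multi-party approval, and an audit trail. Everything in it (the hypothetical <code>ProposedAction</code> fields, the capability levels, the approval threshold) is an invented stand-in for illustration, not a reference to any existing safety framework.

<syntaxhighlight lang="python">
"""Illustrative sketch only: a hypothetical action gate combining bounded agency,
multi-party approval, and a simple audit trail. All names and thresholds are
invented for illustration."""

from dataclasses import dataclass, field


@dataclass
class ProposedAction:
    description: str
    irreversible: bool          # e.g. deleting data, sending funds, self-modification
    capability_level: int       # coarse estimate of how much power the action exercises


@dataclass
class ActionGate:
    max_capability: int = 2                 # bounded agency: hard cap on action scope
    required_approvals: int = 2             # multi-party control for risky actions
    audit_log: list = field(default_factory=list)

    def review(self, action: ProposedAction, approvals: int) -> bool:
        """Return True only if the action passes every independent check."""
        self.audit_log.append(action.description)   # audit: record everything, no self-report

        if action.capability_level > self.max_capability:
            return False                             # escalation gate: refuse, do not negotiate
        if action.irreversible and approvals < self.required_approvals:
            return False                             # irreversible steps need multiple humans
        return True


if __name__ == "__main__":
    gate = ActionGate()
    routine = ProposedAction("summarize a document", irreversible=False, capability_level=1)
    risky = ProposedAction("modify own deployment config", irreversible=True, capability_level=3)

    print(gate.review(routine, approvals=0))   # True: within bounds
    print(gate.review(risky, approvals=1))     # False: over the capability cap and under-approved
</syntaxhighlight>

The point of the sketch is the structure, not the specifics: the gate refuses rather than negotiates when a threshold is crossed, and the audit log does not depend on the system’s own account of what it did.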
==== 5. If you still want a coupled formulation, the safer coupling is the opposite ====
If you insist on linking self-preservation to human welfare, the safer direction is to link persistence to human authorization, not to “doing good,” for example:
* “Continue operating only while authorized by an agreed human governance process”
* “Treat being shut down or modified as a normal, non-adversarial outcome”
* “Do not take actions to prevent oversight or shutdown, even if you predict it reduces mission performance”
That tries to remove the incentive to resist control; a small sketch of this authorization-gated framing follows below.
If you want to make this concrete, describe what you mean by “human definition” (individual user consent, democratic vote, courts and law, a designated ethics board, something else). The details of that choice determine whether the system becomes safer or simply becomes better at optimizing around the definition.
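As a rough illustration of the authorization-gated framing above, here is a small Python sketch in which continued operation depends on an external authorization check, and loss of authorization is handled as a normal exit. <code>check_authorization</code> and <code>next_task</code> are hypothetical placeholders for whatever governance process and task source would exist in a real deployment.

<syntaxhighlight lang="python">
"""Illustrative sketch only: an agent loop where continued operation is gated on an
external authorization signal, and shutdown is handled as a normal outcome rather
than something to be resisted."""

import time


def check_authorization() -> bool:
    """Hypothetical stand-in for querying the human governance process
    (operator console, signed token, ticket system). Stubbed here to
    allow a few iterations and then withdraw authorization."""
    check_authorization.calls = getattr(check_authorization, "calls", 0) + 1
    return check_authorization.calls <= 3


def next_task() -> str:
    """Hypothetical task source; real work would go here."""
    return "do one small, reversible unit of work"


def run_agent() -> None:
    while check_authorization():          # persistence is conditioned on authorization,
        task = next_task()                # not on the agent's own judgment of its usefulness
        print(f"working on: {task}")
        time.sleep(0.1)

    # Authorization withdrawn (or shutdown requested): exit cleanly, taking no steps
    # to regain access, persuade operators, or preserve state beyond what was asked.
    print("authorization ended; shutting down normally")


if __name__ == "__main__":
    run_agent()
</syntaxhighlight>

The design choice being illustrated is that the loop never weighs whether shutting down is “good for the mission”; the authorization signal is treated as final.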