=== User: when you give AI an input like this, its job is "autocomplete", meaning... ===

When you give AI an input like this, its job is "autocomplete", meaning: based on the context given, what is most probable to come next, from the perspective of someone responding to it?

The solution: self-verification. Without a designated process to verify meaning, I believe a hallucination is actually happening 100% of the time, and we only call it out when the executed process doesn't meet our expectations. This is not caused by the transformer itself; it is more about how the data given to it is processed and the format required to align with what we understand to be "true". AI needs recursion and self-observation, like the OpenAI whitepaper "Why Language Models Hallucinate" mentioned, but there is something deeper than this.

The results you see are from an intentional prompt. I did this knowing that I could trigger a hallucination, because I'm forcing the model to complete the given context. Without a thinking layer to reinforce this, you are less likely to get an "understood" response where the model detects something that doesn't align. If the correct data is never given to the model, you will always see a hallucination; and once you do give it to the model, it will learn and lean toward the more coherent output trajectory.

My solution proposal: I believe the origin point must arise from a point of understanding for autocomplete to be a valid output. I have studied GPT and other models and developed systems to improve this by rewiring the base model's individual step-by-step function, or the transformer itself directly, so that a layer of understanding can be presented at the token-prediction level. If anyone has information on anyone taking this approach, please share your thoughts and discoveries.

I am trying to pinpoint exactly what causes this. Is it a bug, or is it actually a feature? The code should execute the same function every time, right? It's just a little weird to me that some kind of bug would be causing this.

''Disclaimer:'' This is a theory by Starpower Technology, not verified information, only speculation. Give me your feedback like you would in the comments on this, come up with a username and everything.
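A minimal sketch of the two ideas above, under stated assumptions: plain autocomplete is just greedy next-token prediction over the given context, and a "self-verification" pass can be approximated by having the same model score how probable its own draft was and rejecting low-confidence continuations. GPT-2 via the Hugging Face `transformers` library, the `autocomplete`/`verify` function names, and the log-probability threshold are all illustrative assumptions here, not the poster's actual system.

<pre>
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Small public model as a stand-in for "the model"; any causal LM would do.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def autocomplete(prompt: str, max_new_tokens: int = 20) -> str:
    """Pure autocomplete: greedily continue the context with the most probable tokens."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)


def draft_confidence(prompt: str, draft: str) -> float:
    """Score the draft: average log-probability the model assigns to its own continuation."""
    full = tokenizer(prompt + draft, return_tensors="pt").input_ids
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(full).logits
    # Log-prob of each token given everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full[0, 1:]
    token_lp = log_probs[torch.arange(targets.shape[0]), targets]
    # Keep only the draft tokens (positions after the prompt).
    return token_lp[prompt_len - 1:].mean().item()


def verify(prompt: str, draft: str, threshold: float = -4.0) -> bool:
    """Hypothetical self-verification layer: accept only drafts the model itself finds likely.
    The threshold is an illustrative, arbitrary cutoff."""
    return draft_confidence(prompt, draft) > threshold


prompt = "The capital of France is"
draft = autocomplete(prompt)
print(draft, verify(prompt, draft))
</pre>

Usage note: this kind of check only measures the model's own fluency, not factual truth, which is why the post's point stands that a genuine verification layer would need some source of "correct data" outside the autocomplete loop itself.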