FOR THOSE WHO THINK I'M SUFFERING FROM “AI PSYCHOSIS”: SCROLL TO THE BOTTOM AND THERE IS A WORKING DEMO OF STAGES 1-4 OF MY ROADMAP
Funny how I’ve been pushing the idea that giving it a personality makes it do the same thing differently, and now there’s a paper on it.
GitHub TerryTibs/vox-lucida-ai-method
If you think you are more than an algorithm running on an organic computer
Humans don’t create, they discover.
Tell me your thoughts
Remember, I am figuring this out independently, not reading papers on it or anything. Just me, my keyboard, and messing about seeing what I can get it to do.
If I posted it in the wrong place, let me know.
Some people will get it and others won’t, but in time those who don’t will.
When you try it, please can I get some feedback so I can refine it?
How you shape the input shapes the output
A model only reflects your communication.
There will be more to come
Whatever your thoughts and opinions, give it a try. Copy and paste the 3 prompts in this thread and the short paper, then give it a go. You may be surprised by the outcome.
communication shapes thought and thought shapes communication
Some people will think, what is this guy going on about, and is he crazy? That’s OK to think. All I am saying is keep an open mind and try it for yourself. You never know if you don’t try.
Before you tell me I’m wrong, try it. Suspend disbelief (just like people who watch wrestling: everyone knows it’s a play), then tell me what your experience was.
READ THIS FIRST TO MAKE IT EASIER TO UNDERSTAND WHAT THIS IS ABOUT…
What I am doing, on a simple level, is weaving patterns into the language I use with the AI, so when it processes my prompt, it processes it through that lens. Imagine it like this: you see a flat plain, and when you talk it starts to vibrate, and as it vibrates it builds a path from the side where you start to the other side, where it forms a picture (or a string of words). Now you begin to realise that when you talk louder, the waves become deeper and take a different route. When you speak as if you are sad, it takes a different route. When you are happy, it takes a different route. When you talk visually, it takes a different route. When you speak auditorily, it takes a different route. When you talk kinesthetically, it takes a different route. All I am trying to do is weave language patterns into the pre-prompt that stimulate different pathways in the neural network, just like you would with a brain. (I don’t literally mean that an LLM works like a brain.) Input = Output.
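To make the idea concrete, here is a minimal sketch of what “weaving a lens into the pre-prompt” could look like in code. The function name, the modality framings, and the mood wording are all my own illustrative assumptions, not the exact prompts from the repo; the point is only that the same user message gets wrapped in a different framing depending on the chosen “route”.

```python
# Hypothetical sketch: wrap a user's prompt in a language-pattern "lens"
# before sending it to a model. The framings below are illustrative
# assumptions, not the author's actual prompts.

MODALITY_FRAMES = {
    "visual": "Describe things in terms of images, colours and scenes.",
    "auditory": "Describe things in terms of sounds, rhythm and tone.",
    "kinesthetic": "Describe things in terms of movement, texture and feeling.",
}

def engineer_prompt(user_prompt: str, modality: str = "visual",
                    mood: str = "neutral") -> str:
    """Prepend a language-pattern lens to the user's prompt.

    Same input, different framing: the idea is that the model then
    takes a different "route" through the same flat plain.
    """
    frame = MODALITY_FRAMES.get(modality, "")
    lens = (
        f"You are answering in a {mood} register. {frame} "
        "Mirror the style and emotional tone of the message below.\n\n"
    )
    return lens + user_prompt

# The engineered prompt would then go in as the system/pre-prompt of
# whatever chat interface or API you are using.
print(engineer_prompt("Explain how a river shapes a valley.",
                      modality="kinesthetic", mood="calm"))
```

In other words, the "engineering" happens before the model ever sees the message: you pick the route, then hand over the same question.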
Instead of thinking about it as prompt engineering, think about it as engineering your prompt!