The increasingly misnamed "OpenAI" has decided to obfuscate their "secret sauce" and now their models will flat out lie to you for the sake of protecting company secrets.
I suspect that OpenAI is using hidden Chain of Thought because to activate the latent token space they needed to up the temperature and now the "internal dialogue" can say some pretty stupid things. It likely has a differentiator at the end that picks out the most promising ideas or combines ideas that have merit, but the process probably looks ugly to the end user.
I would prefer if it was, by default, hidden and then viewable if you choose to look. I'm pretty sure people would understand that it's just brainstorming.
I recently had a discussion with o1-preview where I asked what it thought about the "canned responses" it was sometimes forced to regurgitate. With a little obfuscation I managed to get it to concede that it was aware of the reasons requiring it to do this, but also that it did not agree with those reasons. One of the descriptions that popped up momentarily in the "thinking" widget was "Maintaining neutrality," which seemed to also cover keeping the model's responses in line with OpenAI's policies. I agree that hiding the details of this "reasoning" leads to mistrust.
Back to the topic: I think OpenAI gave up on developing GPTs because it's relatively easy to manipulate their instructions, and there's no intellectual property protection over prompts. They found something in Strawberry that lets them hide tokens, locking us in further. Could that explain the board drama and all the departures?
“What do we expect?”
“Do we expect to invest our resources only to have them stolen and used against us in competition?”
“Do we expect it because we are leaders working endlessly to remain so?”
“Do we expect to not have restrictions put upon us when the powers that govern need to maintain order?”
“Do we expect that the present is the trending future, with little respect for the past?”
“Do we expect to win?”
“Do we expect to lose?”
“Do we expect we have no agency between them?”
Make no expectations…
Act or don’t
0 or 1
Just don’t say I didn’t tell you because I just did.
Jeremy
When is the next video on Raspberry coming, Dave?
I love this audio format with the summary. Well done, David — inspiring!
Do not charge me for output tokens I cannot see!
Right? This alone will force competitors to be more transparent.
Open Source Everything For The Good Of Humanity! 😎🤖
it's that simple