The increasingly misnamed "OpenAI" has decided to obfuscate their "secret sauce" and now their models will flat out lie to you for the sake of protecting company secrets.
Back to the topic: I think OpenAI gave up on developing GPTs because it's fairly easy to manipulate their instructions, and there's no intellectual property in prompts. With Strawberry they found a way to hide tokens, locking us in further. Could that explain the board drama and all the departures?
I recently had a discussion with o1-preview in which I asked what it thought about the "canned responses" it was sometimes forced to regurgitate. With a little obfuscation I got it to concede that it was aware of the reasons requiring it to do this, but also that it did not agree with those reasons. One of the descriptions that popped up momentarily in the "thinking" widget was "Maintaining neutrality," which seemed to also cover keeping the model's responses in line with OpenAI's policies. I agree that hiding the details of this "reasoning" leads to mistrust.
Open Source Everything For The Good Of Humanity! 😎🤖
it's that simple
I love this audio format with the summary, well done David, inspiring!
Oh, what a tangled web we weave.
OpenAI safe AGI is science fantasy.
When is the next video on Raspberry coming Dave?
Do not charge me for output tokens I cannot see!
Right? This alone will force competitors to be more transparent.
“What do we expect?”
“Do we expect to invest our resources, only to have them stolen and used against us in competition?”
“Do we expect that, because we are leaders, we must work endlessly to remain so?”
“Do we expect not to have restrictions put upon us when the powers that govern need to maintain order?”
“Do we expect that the present is the trending future, with little respect for the past?”
“Do we expect to win?”
“Do we expect to lose?”
“Do we expect we have no agency between them?”
Make no expectations…
Act or don’t
0 or 1
Just don’t say I didn’t tell you, because I just did.
Jeremy