Given the situation we are in, the question is: what would be the correct approach?
Is it trying to 'make information unavailable', which only incentivizes those who seek the information to pursue it more vigorously or through other means, including a dark-net model? (A scarcity-type response, especially when the scarcity is so clearly artificial.)
Or would it make more sense to have the behavior of Perplexity, but with two twists:
1. The output is altered in slight but significant ways,
2. Unless you are a verified, logged-in user, in which case the query would flag you instead?
One could argue that Sam Altman's 'World' coin concept could remove that verification limitation.
But let's face reality: not only will that never fly (CBDC concerns on steroids, for good reasons), it would also not solve the problem of access to dangerous information, because such a move would guarantee that, no matter the cost, a dark-net model for criminals would become a priority so they can stay, as usual, one step ahead.
Ultimately, at least where my game-theory thinking pushes me, we are in a situation where we could end up in a 'Minority Report'-like scenario under the pretense of assumed or claimed safety.
Absurdly, that wouldn't make things better:
1. Given that DeepSeek might have used OpenAI outputs to train their model, it wouldn't be far-fetched to assume that criminals have done something similar for their own purposes, making any such effort too late.
2. Such a move would only work if a single AI company existed. Otherwise it becomes possible to piece the information together across many providers, or to use something like OpenRouter to automate it.
3. Identity verification also faces a second problem: just as workers from Pakistan are willing to work in Saudi Arabia despite the hazards in order to feed their families, poor people could be paid to query for pieces of information (let's call them 'lucrative remote work opportunities anyone with a computer can do'), hiding the requester's identity more effectively than it needs to be hidden today.
To me it seems that a more reasonable approach, if the goal is to prevent harm, could be to not rely on tried and verified non-working solutions like 'hiding information', 'lying / altering information' and 'draconian policies', and instead focus on building transparency into who requests what type of information, enabling others to operate on that layer both to build 'trust' and to identify threats, similar to how Bitcoin operates. Perhaps even bind 'potentially risky questions' to transactions on something like the Bitcoin network to be able to glean a fuller web of activity.
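To make that last idea slightly more concrete, here is a minimal sketch of what such a commitment scheme could look like. This is purely an assumption of how it might work, not something any provider actually does today: the provider hashes a record of a flagged query (category, pseudonymous user token, timestamp) and anchors only the digest on a public chain, e.g. inside a Bitcoin OP_RETURN output. All names and parameters below (query_commitment, provider_id, the category labels) are hypothetical.

```python
# Hypothetical sketch: commit a flagged-query record to a public ledger by
# publishing only its hash. The raw query never leaves the provider; an
# auditor who is later given the record can recompute and verify the digest.
import hashlib
import json
import time

def query_commitment(provider_id: str, query_category: str, user_token: str) -> str:
    """Return a SHA-256 digest over a canonical record of a risky query.

    Only a coarse category and a pseudonymous user token are committed,
    not the query text itself -- an illustrative design choice.
    """
    record = {
        "provider": provider_id,
        "category": query_category,
        "user": user_token,          # pseudonymous; the verified identity stays with the provider
        "timestamp": int(time.time()),
    }
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    digest = query_commitment("example-llm-provider", "dual-use-chemistry", "user-4f2a")
    # A 32-byte digest fits comfortably in an 80-byte OP_RETURN payload.
    print(f"commitment to anchor on-chain: {digest}")
```

Publishing only a digest would keep the query text private while still letting third parties see patterns of who (pseudonymously) asks for which categories of information, and verify any later disclosure against the chain.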
AI safety is a big deal. What kind of technology tools will we need to make AI safer? Legacy safety and security tools like CrowdStrike and SentinelOne do not seem to address this new challenge at all.
AI safety is a misnomer. It's not an AI problem, it's a Homo sapiens issue. We're not collectively evolved enough for any technology to ever be considered safe. It's us, not the machines, who need better guardrails 😂
Brilliant and disturbing
Brilliant David.