@BronzeAgeHogCranker it's genuinely difficult to tell what happened here. It's simply not possible that they didn't encounter this over and over again in testing. The only conclusion is that this is just how it's supposed to work.
Pure speculation, and possibly a bit schizo, but it's advanced data collection that helps maintain market capture while minimizing cost. It's much easier to gather market research if you package it as a fun new toy. By building an LLM that blocks off pathways that have already been mapped, they can more efficiently predict, and possibly shape, people's media consumption habits while avoiding remapping known avenues. Oh, you consumed this sequence of media? That means you're traveling down this line of thinking, so we need to incentivize creation of these types of videos to be fed to you, bumping you back on course with what's popular, so we can reduce the overall amount of content we have to host. As consumers seek out new content, they can readjust the guardrails to herd the audience back toward safer (and more profitable) content.
OR
They're brute-forcing spells to be used for the mind-control magic that is The Algorithm.
The retardation is built in. If your prompt isn't novel enough to warrant spending the computational cycles to generate a response, you get a canned message that bumps you back into the data-collection mine until you hit paydirt and dig down a new vein. So if a normie gets bored with the puzzle within the first few minutes, they've effectively been filtered out of the pool of competent question-askers.
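For what it's worth, here's a toy sketch of the gating I'm describing. Everything in it (the novelty scoring, the threshold, the canned message) is made up to illustrate the idea, not anything Google has published:

```python
# Toy illustration of the hypothesized novelty gate. All names and values
# here are invented stand-ins, not anything documented by Google.

CANNED_MESSAGE = "I can't help with that."
NOVELTY_THRESHOLD = 0.5

seen_prompts: set[str] = set()

def novelty(prompt: str) -> float:
    """Crude stand-in: fraction of words not already seen in earlier prompts."""
    words = set(prompt.lower().split())
    if not words:
        return 0.0
    seen_words = {w for p in seen_prompts for w in p.lower().split()}
    return len(words - seen_words) / len(words)

def respond(prompt: str) -> str:
    if novelty(prompt) < NOVELTY_THRESHOLD:
        return CANNED_MESSAGE            # filtered: bounced back into the mine
    seen_prompts.add(prompt)             # new vein: worth spending the cycles
    return f"[generate a real response to: {prompt!r}]"

print(respond("paint me a viking longship"))        # all new words -> real answer
print(respond("paint me a viking longship again"))  # mostly seen -> canned message
```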
@Christmas_Man @BronzeAgeHogCranker I am normally the first one to jump to conspiracy theories, since they are often right. This situation hits different though. It's difficult for me to interpret this in a way that doesn't look like plain horrendous incompetence with a complete failure of checks and balances.
Don't confuse that with me saying it wasn't malicious, because it absolutely was. The whole "ban all whites" thing is perfectly in line with Google's and California's worldviews as a whole. That is normally implemented successfully as a "boiling the frog" situation; I know because they've been doing it for decades. We know what successful subversion looks like. :niggageorgewashington: is not successful subversion. It's a laughingstock.
Either the engineers completely failed to make a competent model and had to release it as-is, or the "culture experts" at Google woefully misunderstood how the public would view this. One of those things has to be true.
I think it's the latter. They're looking for an end result that hits specific KPIs to stay in line with DIE-friendly guidelines, but the AI is blocking off all paths that have been found to be effectively dead ends (which happen to contain their critical checkpoints). The result is what you see: the core of their desired ESG-friendly end goal either duct-taped on top of the real output or replacing it entirely, courtesy of the "culture experts".