thinking about GPTs
thesis: we’re obviously not at AGI yet. but people implicitly assume that “human-equivalent intelligence” means someone knowledgeable and accurate, not “an accurate simulation of the average human”
antithesis: maybe GPTs are so convincing because, for most people, conversation is much more about rhythm and displays of affinity than about actual content
synthesis: maybe we actually have reached human-equivalent intelligence, purely because most human conversation is *also* statistically-most-probable meaningless garbage
@gardevoir Someone on the Twitter machine a while back said, to paraphrase: "maybe we should be less worried about bots passing the Turing test and more about how some people seem to fail it", and that sure has lived rent-free in my head for a while.
@gardevoir (... yet we contain myriads, and it really only helps explain and cover for the laziest of decisions, from reflexive lizard-brain reactions to low-information bean counting to templatized, regurgitated nonsense that plausibly looks like text or art. So, uh.
Maybe the fact that a glorified chatbot reflects that says something about the ways we can be better people, and about the parts that matter most.)