thinking about GPTs
thesis: we’re obviously not at AGI yet. but people implicitly assume that a “human-equivalent intelligence” means someone knowledgeable and accurate, not “an accurate simulation of an average human”
antithesis: maybe GPTs are so convincing because for most people, conversation is much more about rhythm and displays of affinity than actual content
synthesis: maybe we actually have reached human-equivalent intelligence, purely because most human conversation is *also* most-probable-statistic-response meaningless garbage
@gardevoir (But then, if decision-making based on probabilistic garbage isn't an accurate description of, e.g., middle management decisions, in which most of the choices are low-information calls based on expected value and paperclip maximization, I'm not sure what is. So maybe that really does explain some people's lived experiences in a number of cases?)
@gardevoir (... yet we contain myriads, and it really only helps explain and excuse the laziest of decisions, from reflexive lizard-brain to low-information bean counting to templatized, regurgitated nonsense that plausibly looks like text or art. So, uh.
Maybe the fact a glorified chatbot reflects that says something about the ways we can be better people and the parts that matter most.)