"A few years ago, I stumbled across an ancient book on expert systems, because I find reading technology books from the 1980s enlightening. One of the points the book stressed was that an expert system had two jobs: the first was to make a decision within its area of expertise, but the second was *to be able to explain its reasoning*. The output isn't useful if it can't be justified." (from http://thedailywtf.com/articles/the-oracle-effect)
[ponders neural networks]
@typhlosion as a person who writes and uses SMT solvers a lot, I think in a lot of cases this is proven not to be true. If it were, we wouldn't have the microprocessors we have today - those are laid out and approved by solver. Most of my ROP gadget pivots are designed by solver.
@Fuego i suppose i don't know enough about smt solvers to really make much of a comment here
@Fuego ... though i guess on some level smt solvers provide justification trivially via "these are some answers that satisfy your formula, and you can check that yourself"
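(a toy sketch of that "check it yourself" point, with a made-up constraint and a made-up model - finding a satisfying assignment is the hard part, but verifying one the solver hands back is a single evaluation:)

```python
def formula(x, y):
    # An arbitrary example constraint, standing in for whatever
    # formula you'd actually hand to an SMT solver.
    return x + y == 10 and x > y and x * y == 21

# Suppose the solver returns the model {x: 7, y: 3}.
model = {"x": 7, "y": 3}

# Checking the solver's answer requires no trust in the solver's
# internal reasoning - you just evaluate the formula yourself.
assert formula(**model)
print("model checks out:", model)
```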
expert systems for fuzzier topics still need some accountability i think
@typhlosion I mean you can always verify a decision by trying it yourself, right?
@Fuego yes, but that's less easily done when your expert system is supposed to provide answers for things like traffic light timings or crime prediction
@Fuego that's kind of a flippant answer but that's the kind of thing the original article was referring to - i guess a better term would be "decision support systems"
@typhlosion I think being able to have that qualia of "understanding" will be a luxury that holds us back
@Fuego seems like a good way to make yourself vulnerable to flawed algorithms
@typhlosion and I'm not saying I'm super comfortable with the idea, just that I have been in a situation where not using an unjustifiable solution would have meant failing to solve the problem. That feels pretty weird.
@Fuego not saying everything should be justified or validated. that's trivially untenable. my point is that some problems are not so easily shown to be based on flawed axioms
increasingly, flawed business logic in mathy/engineeringy apps can be exposed with formal verification
but (e.g.) a racist bias encoded in a crime prediction decision support system like the one the article mentions is much harder to discern without introspecting into the logic behind its decisions
@typhlosion reasons we should probably destroy all computers and the internet asap