"A few years ago, I stumbled across an ancient book on expert systems, because I find reading technology books from the 1980s enlightening. One of the points the book stressed was that an expert system had two jobs: the first was to make a decision within its area of expertise, but the second was *to be able to explain its reasoning*. The output isn't useful if it can't be justified." (from http://thedailywtf.com/articles/the-oracle-effect)
[ponders neural networks]
@typhlosion as a person who writes and uses SMT solvers a lot, I think in a lot of cases this has been proven not to be true. If it were, we wouldn't have the microprocessors we have today - those are laid out and approved by solver. Most of my ROP gadget pivots are designed by solver.
@Fuego i suppose i don't know enough about smt solvers to really make much of a comment here
@Fuego ... though i guess on some level smt solvers provide justification trivially via "these are some answers that satisfy your formula, and you can check that yourself"
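(A toy sketch of that check-it-yourself property: the formula and the "model" below are made-up examples, not real SMT output, but they show why checking a solver's answer is easy even when finding it isn't.)

```python
# An SMT solver returns a model (a satisfying assignment). Verifying the model
# is just evaluating the formula -- no trust in the solver's reasoning needed.

def formula(x: int, y: int) -> bool:
    # "find x, y with x + y == 10 and x * y == 21"
    return x + y == 10 and x * y == 21

# Suppose the solver hands back this model:
model = {"x": 3, "y": 7}

# Checking the answer is trivial, even if searching for it was hard:
assert formula(**model)
```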
expert systems for fuzzier topics still need some accountability i think
@typhlosion I mean you can always verify a decision by trying it yourself, right?
@Fuego yes, but that's less easily done when your expert system is supposed to provide answers for things like traffic light timings or crime prediction
@Fuego that's kind of a flippant answer but that's the kind of thing the original article was referring to - i guess a better term would be "decision support systems"
@typhlosion I suspect that being able to have that qualia of "understanding" will be a luxury that holds us back
@typhlosion we already are. We've been vulnerable to flawed algorithms since we started using algorithms. Haven't you ever done a math problem wrong? What if you were building something with that and no one else caught the mistake? Same thing.
@bea @typhlosion if I have to factor a quadratic, I will ALWAYS get it wrong - but I can write a solver to do it for me!!!
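(Such a solver really is a few lines - here's a minimal sketch for quadratics x² + bx + c with real roots, using the quadratic formula; the function name is made up for illustration.)

```python
import math

def factor_quadratic(b: float, c: float):
    """Find the roots r1, r2 of x^2 + b*x + c, so it factors as (x - r1)(x - r2).

    Uses the quadratic formula; raises if the roots are complex.
    """
    disc = b * b - 4 * c
    if disc < 0:
        raise ValueError("complex roots; no real factorization")
    root = math.sqrt(disc)
    return (-b + root) / 2, (-b - root) / 2

# x^2 - 5x + 6 factors as (x - 2)(x - 3):
print(factor_quadratic(-5, 6))  # (3.0, 2.0)
```

And, as with the SMT case, you can verify the answer by multiplying the factors back out - no trust in the solver needed.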
@Fuego @typhlosion fuckin... long division!!!!!