"A few years ago, I stumbled across an ancient book on expert systems, because I find reading technology books from the 1980s enlightening. One of the points the book stressed was that an expert system had two jobs: the first was to make a decision within its area of expertise, but the second was *to be able to explain its reasoning*. The output isn't useful if it can't be justified." (from http://thedailywtf.com/articles/the-oracle-effect)
[ponders neural networks]
@typhlosion as a person who writes and uses SMT solvers a lot, I think in a lot of cases this has been shown not to be true. If it were, we wouldn't have the microprocessors we have today - those are laid out and approved by solver. Most of my ROP gadget pivots are designed by solver.
@Fuego i suppose i don't know enough about smt solvers to really make much of a comment here
@Fuego ... though i guess on some level smt solvers provide justification trivially via "these are some answers that satisfy your formula, and you can check that yourself"
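that "check it yourself" property can be sketched in a few lines of Python - not a real SMT solver, just a toy brute-force search over a made-up constraint, to show that the solver's output is a concrete assignment anyone can re-verify without trusting the solver's internals:

```python
# Toy illustration of the "check it yourself" property of solver output.
# This is NOT a real SMT solver -- just brute force over a small domain
# with an invented example constraint.

def formula(x, y):
    # Example constraint: find x, y with x + y == 10 and x * y == 21
    return x + y == 10 and x * y == 21

def toy_solve(domain):
    """Brute-force 'solver': return any satisfying assignment, or None."""
    for x in domain:
        for y in domain:
            if formula(x, y):
                return (x, y)
    return None

model = toy_solve(range(100))
# The point: whoever receives `model` can re-check it independently,
# just by evaluating the formula -- no trust in the solver required.
assert model is not None and formula(*model)
print(model)  # (3, 7)
```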
expert systems for fuzzier topics still need some accountability i think
@typhlosion I mean you can always verify a decision by trying it yourself, right?
@Fuego yes, but that's less easily done when your expert system is supposed to provide answers for things like traffic light timings or crime prediction
@Fuego that's kind of a flippant answer but that's the kind of thing the original article was referring to - i guess a better term would be "decision support systems"
@typhlosion I just think in the future we'll be moving more and more towards not having justification for our most optimal solutions to problems.
@typhlosion that being able to have that qualia of "understanding" will be a luxury that holds us back
@Fuego seems like a good way to make yourself vulnerable to flawed algorithms
@typhlosion we already are. We've been vulnerable to flawed algorithms since we started using algorithms. Haven't you ever done a math problem wrong? What if you were building something with that and no one else caught the mistake? Same thing.
@Fuego @typhlosion fuckin... long division!!!!!
@bea @typhlosion if I have to factor a quadratic, I will ALWAYS get it wrong - but I can write a solver to do it for me!!!
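a quadratic solver really is only a few lines - here's a minimal sketch using the quadratic formula, and (tying back to the thread's point about checkable answers) the output verifies itself by plugging each root back into the polynomial:

```python
import cmath  # complex sqrt, so negative discriminants work too

def solve_quadratic(a, b, c):
    """Return the two roots of a*x^2 + b*x + c = 0 (complex if needed)."""
    if a == 0:
        raise ValueError("not a quadratic (a == 0)")
    d = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# The answer is self-checking: substitute each root back in.
roots = solve_quadratic(1, -5, 6)  # x^2 - 5x + 6 = (x - 2)(x - 3)
for r in roots:
    assert abs(r * r - 5 * r + 6) < 1e-9
print(roots)  # ((3+0j), (2+0j))
```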
@typhlosion and I'm not saying I'm super comfortable with the idea just that I have been in a situation where not using an unjustifiable solution would have meant I fail to solve a problem. That feels pretty weird.
@Fuego not saying everything should be justified or validated. that's trivially untenable. my point is that some problems are not so easily shown to be based on flawed axioms
increasingly, flawed business logic in mathy/engineeringy apps can be exposed with formal verification
but (e.g.) a racist bias encoded in a crime prediction decision support system like the one the article mentions is much harder to discern without introspecting into the logic behind its decisions
@typhlosion for sure - I'm just saying our only way to move forward is to have people inspecting the translation from real world data to truth table, and that's the best we'll be able to do.
@Fuego that was my original point all along. it's one of the big weaknesses of NNs in my opinion - the fact that computers shake out optimized functions for the nodes means we can't ask them to explain comprehensibly how they get from input to output. it's just a black box, which is okay for some things but leaves open a big question of accountability
@typhlosion @Fuego this is a problem in all black box models tho, right?
devil's bargain
decision trees and statistical methods are nice if you need to know why a thing
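a toy sketch of why decision trees can "know why a thing": the prediction is literally a path of readable tests. All the feature names and thresholds here are made up (loosely riffing on the traffic-light example from earlier in the thread):

```python
# Toy interpretable classifier: a hand-written decision tree whose
# prediction comes with the exact chain of tests it took. Features
# ("hour", "traffic") and thresholds are invented for illustration.

def classify(sample):
    """Return (label, explanation) for a dict of named features."""
    path = []
    if sample["hour"] >= 22:
        path.append("hour >= 22")
        if sample["traffic"] > 50:
            path.append("traffic > 50")
            return "long_green", path
        path.append("traffic <= 50")
        return "flashing_yellow", path
    path.append("hour < 22")
    return "normal_cycle", path

label, why = classify({"hour": 23, "traffic": 80})
print(label)              # long_green
print(" AND ".join(why))  # hour >= 22 AND traffic > 50
```

contrast with a trained NN, where the "why" is a pile of optimized weights with no comparable readable trace.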
@typhlosion like capitalism though, if we don't destroy it soon it'll become baked into the arms race of our species :/ Anyone comfortable with throwing their hands up will have a serious advantage
@Fuego anyone comfortable with throwing their hands up will, i suspect in at least some cases, eventually be destroyed by poorly formulated answers or the reactions of people slighted by them
black box models are fine, again, for some things, but there are some applications for which they are just not appropriate given what i would consider a reasonable set of priorities
@typhlosion If you're smart enough to create a good NN you should be smart enough to know when it's appropriate to apply it.
At least in my experience the only people who come up with stupid times to apply them are mercifully completely unable to create them.
That's the real human skill involved.
@Fuego we live in a world where someone conceived of, got funding for, created, marketed, and sold a $400 IoT proprietary cold press juicer, the functionality of which can be simulated using one's bare hands. i have a hard time believing knowing how to make something implies knowing whether it is appropriate to do so
@typhlosion I just don't think it took a whole slew of brain cells to make a juice press
@Fuego there's also this story:
http://thedailywtf.com/articles/No,_We_Need_a_Neural_Network
obviously it's played for humor and i can't vouch for whether it's even remotely true or not but i think the behavior exhibited by the characters is plausible
@typhlosion it's just that creating the rules for a system requires an intimate knowledge of the problem space, because you define the feedback to the system - it's not trying it for real like evolution does.
Making a juice press doesn't require intimate knowledge of the marketplace.
If that story is true, they seriously fudged the time scale and the number of people working on it, and the client was a government or Google.
@typhlosion *gives hugs and pets u*
@Fuego man, i dunno. maybe i'm just a hopeless idealist. maybe something something incompleteness something something halting problem means we should just throw our fucking hands up and accept that we're eventually never gonna understand any of the solutions we built our machines to give us
@typhlosion reasons we should probably destroy all computers and the internet asap
@typhlosion Sorry :/
@Fuego not your fault, i need to not put my foot in my mouth
do you ever regret making a post