 Perhaps a lot of weight rests on "infers" here.

My bash script is not "inferring" anything: it is simply repeating something given to it.

But - using the example in the paper - is a system with a camera that takes action based on a pixel really "inferring" that the pixel represents a person, or has it just been programmed to take a particular action when the data pass certain tests?
@29d67b21 But you have no complete definition of the tests in the AI case; the fact that you don't know the relationship between the pixel input and "it's a person" is why it's inferred, isn't it?
@14360743 It might be - I honestly don't know how I'd distinguish things based on that definition, though.
@29d67b21 OK, try a different one. In the bash case, someone sits down and writes code to perform a task. In the AI case, it's trained - there's no written definition; the training process has inferred the mapping.
 @14360743 Is that what the definition says?
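
To make the contrast in the exchange concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the names, the toy "brightness" feature, the threshold, the training loop) - none of it comes from the paper's camera example. The point is only the structural difference: in the first detector a person writes the test; in the second, a perceptron, the test is inferred from labelled examples and its parameters were never written down by anyone.

import random

# The "bash script" case: a human writes the test explicitly.
# The threshold 0.5 is chosen by the programmer; the mapping from
# input to decision is fully specified in advance.
def handwritten_detector(pixel_value: float) -> bool:
    return pixel_value > 0.5

# The "AI" case: the mapping is inferred from labelled examples.
# A one-weight perceptron: nobody writes the decision rule; training
# adjusts w and b until the examples are classified correctly.
def train_detector(examples, epochs=100, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (label - pred) * x
            b += lr * (label - pred)
    # The learned test: w and b are outputs of training, not of a programmer.
    return lambda x: w * x + b > 0

# Synthetic training data: (pixel_value, is_person) pairs, made up here.
random.seed(0)
examples = [(random.uniform(0.6, 1.0), 1) for _ in range(50)] + \
           [(random.uniform(0.0, 0.4), 0) for _ in range(50)]

learned_detector = train_detector(examples)

# Both detectors "take a particular action if data pass certain tests",
# but only in the second case was the test itself produced by training.
print(handwritten_detector(0.8), learned_detector(0.8))  # True True
print(handwritten_detector(0.2), learned_detector(0.2))  # False False

On this reading, the distinction the thread is circling is not what the system does at run time (both apply a test to input) but where the test came from: written by a person, or inferred from data during training.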