Perhaps a lot of weight rests on "infers" here. My bash script is not "inferring" anything: it is simply repeating something given to it. But - using the example in the paper - is a system with a camera taking action based on a pixel really "inferring" that the pixel represents a person, or is it just doing what it has been programmed to do when the data pass certain tests?
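To make the contrast concrete, here is a toy sketch (every name and number in it is invented for illustration, not taken from the paper): one test where the author wrote the pixel rule down by hand, and one where the cutoff is fitted from labelled examples, so the input-to-"person" relationship was never explicitly specified by anyone.

```python
# Toy contrast between a programmed test and an inferred one.
# All thresholds and data here are invented for illustration.

def programmed_test(pixel: int) -> bool:
    # The relationship between input and output is fully written down:
    # the author chose the threshold 128 and can explain it.
    return pixel > 128

def fit_threshold(examples):
    # "Infer" a cutoff from labelled (pixel_value, is_person) pairs.
    # Nobody wrote this rule down; it falls out of the data.
    positives = [p for p, is_person in examples if is_person]
    negatives = [p for p, is_person in examples if not is_person]
    return (max(negatives) + min(positives)) / 2

data = [(30, False), (60, False), (180, True), (220, True)]
learned_cutoff = fit_threshold(data)

def inferred_test(pixel: int) -> bool:
    # Behaves just like the programmed test, but the cutoff
    # came from the examples rather than from the programmer.
    return pixel > learned_cutoff
```

Both functions end up doing the same kind of "if data pass a test, act" check; the only difference is where the test came from, which is roughly the distinction the word "infers" is being asked to carry.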
@29d67b21 But you have no complete definition of the tests in the AI case; the fact that you don't know the relationship between the pixel input and "it's a person" is why it's inferred, isn't it?
@14360743 It might be - I honestly don't know how I'd distinguish the two cases based on that definition, though.