I get the hype aversion. I get the 'technology' aversion - but if you want to recognize handwriting in an image, you're not going to do it by explicitly building rules from the ground up; it just doesn't work that way.

"All" machine Learning" and AI (deep learning) is, is a defined process to minimize errors, and it uses numbers to do so. The specific numbers are essentially the rules that have been 'found' by the algorithm that minimizes error, which you don't necessicarily care about because you just want the right answer.

You could simplify all AI and ML algorithms to playing advanced games of "hot and cold":

- You have a bunch of pixel values that represent text.
- You make a guess: "This looks a lot like an L, an i, or a 1, but it looks most likely to be a 7, so I'll say this image is a 7."
- Turns out the correct answer was "T", but they write like a chicken, so I'll have to make an adjustment for next time and hopefully be correct then.
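Here's a hypothetical sketch of that guess-then-adjust step (the scores are invented stand-ins for whatever a real model would compute from the pixels):

```python
import math

labels = ["L", "i", "1", "7", "T"]
scores = [1.0, 1.2, 1.1, 2.5, 0.4]  # model currently thinks "7" is likeliest

def softmax(xs):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(scores)
print("guess:", labels[probs.index(max(probs))])  # prints "7", which is wrong

# Feedback arrives: the true character was "T" (chicken-scratch handwriting).
truth = labels.index("T")
lr = 1.0
for i in range(len(scores)):
    target = 1.0 if i == truth else 0.0
    # Push the "T" score up and the other scores down a bit ("warmer"/"colder").
    scores[i] -= lr * (probs[i] - target)

probs = softmax(scores)
print("after adjustment, P(T) =", round(probs[truth], 2))  # warmer next time
```

One pass like this barely moves the needle; repeat it over lots of labeled examples and the scores settle on numbers that usually guess right, which is the whole game.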