At the end of last year, I decided I was going to start learning a few new things in Q1 of 2020. We have talked some about my adventures in Android. I have hinted at Swift. Then there is deep learning. I remember my first AI class in college. Expert systems were well on their way out but still going strong in academia. I remember thinking that being a professional “if statement” writer seemed like a sad life. Now, I know this is an oversimplification, but it was what I thought as I sat in class. Fifteen years later, I am embarking on a journey to become a data scientist. I really hate that term. Anyway, the company I do a ton of work for needs someone to fill that role, and I was not paying attention when everyone else stepped back. So here is to my journey to my new life selling AI and AI accessories.
What are AI Accessories?
Not a King of the Hill fan, I see. Well, if you pushed me on that, I don’t know. Maybe something like a programming environment. You could argue successfully that Google Colab qualifies. Maybe that is a concept worth exploring.
Fine, so what have you learned?
Mostly, I have been playing with regression problems; I have held off on classification for the time being. I have figured out how rusty my statistics has become, and I have gained a new respect for the calculus I never really learned. The more I read, the more there is that needs to be read. It is a weird thing. Normally, you read one or two beginner books and you can stumble around in a technology. Read another couple of intermediate books and, to the layperson, you are an expert at it. Read a few advanced books and you can hang with almost everyone you will meet on that topic. This has been so different.
How so?
I have been digging into the actual theory behind it. I got tickled by a specific thing while working through a simple gradient descent example. The calculation looked something like an iterating version of this:
prediction = inputValue * weight
x = (prediction - goalPrediction) * inputValue
weight -= x
Here x is the amount, and the plus or minus direction, by which the weight will be modified on the next iteration. prediction is the product of inputValue and the previous weight. Standing in for the correct value is goalPrediction, and inputValue is the value entered into the machine in the first place.
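To make that concrete with some made-up numbers: with an inputValue of 0.5, a weight of 0.5, and a goalPrediction of 0.8, prediction comes out to 0.25, x is (0.25 - 0.8) * 0.5 = -0.275, and subtracting that negative x raises the weight to 0.775, which moves the next prediction to 0.3875, closer to the goal.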
Now, what tickled me was the explanation of why inputValue is the multiplier for the next weight. It is a three-pronged attack (there is a small runnable sketch after this list):
- The first prong is stopping. If the input is literally 0, multiplying by it prevents the weight from changing at all, which is correct since an input of 0 contributed nothing to the prediction.
- The next is negative reversal. When the input is negative, multiplying by it flips the sign of the update, which keeps the error moving toward zero regardless of whether the values are positive or negative. Without this multiple, the error would move away from zero instead of toward it.
- Finally, it scales the update to the weight. A big input means big numbers, so a big input causes a correspondingly big swing in the weight.
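To see the prongs in action, here is a minimal, runnable Python sketch of the loop described above. All of the starting numbers are made up for illustration, and it uses the raw update from the three lines earlier, with no learning rate:

inputValue = 0.5
goalPrediction = 0.8
weight = 0.5

for iteration in range(20):
    prediction = inputValue * weight
    # x is both the size and the direction of the next adjustment
    x = (prediction - goalPrediction) * inputValue
    weight -= x
    print(f"prediction: {prediction:.6f}  weight: {weight:.6f}")

Swap inputValue to 0 and the weight never moves (stopping); make it negative and the update still pushes the prediction toward the goal (negative reversal); make it big, say 1.5, and the weight overshoots harder on each pass (scaling), which is why real implementations multiply the update by a small learning rate.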
Wrapping this up: I am still wrapping my head around this stuff. Hopefully the above is correct and I am not embarrassing myself. Even if I am, I just have to remember that you have to do something poorly before you can do something well. Besides, this is kind of like an example of AI accessories.
When I started this, I was hoping that it would be like other technologies I have learned. There is so much unfamiliar theory behind this that I am feeling a bit underwater. Don’t worry, I am not giving up. I’m just getting the feeling that this is one of those technologies you spend a year really understanding. Hopefully, this will be a fertile blogging ground.
I didn’t understand the code you showed. Can you do a post about it?
Thanks for reading. I will drop it into the docket. I think it will be a good topic.