Understanding & Using Kernel Regression in R & Python. Examining topics such as weighted averages, kernel estimation, kernel density functions, and common choices like the Gaussian kernel function.
In polynomial regression, the goal was to take some columns we have and then transform them. We did this by squaring them.
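As a quick refresher, here's a minimal sketch of that transformation (the column name and data are made up purely for illustration):

```python
import pandas as pd

# Hypothetical data with a single feature column
df = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0]})

# Polynomial feature engineering: add the squared column by hand
df["x_squared"] = df["x"] ** 2
print(df)
```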
In this case, what we want to do is apply what's called a kernel function to the error term itself. The idea is that certain errors are basically outliers and should get very little weight, while other errors are super important and hence should have more weight attached to them. The error term is simply the real value of y minus the predicted value of y.
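To make that concrete, here's a minimal sketch of the idea in Python: a Gaussian kernel turns each residual into a weight, so big residuals (outliers) end up with weights near zero and barely move the fit. The toy data, the bandwidth value, and the use of plain least squares here are illustrative assumptions, not a fixed recipe:

```python
import numpy as np

def gaussian_kernel(u):
    # Standard Gaussian kernel: K(u) = exp(-u^2 / 2) / sqrt(2 * pi)
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

# Toy data with one clear outlier (assumption: one feature, linear trend)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
y[25] += 30.0  # inject an outlier

X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept

# Start from an ordinary least-squares fit
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Re-fit a few times, weighting each point by the kernel of its residual
bandwidth = 2.0  # illustrative choice; controls how fast weight decays
for _ in range(10):
    residuals = y - X @ beta           # real y minus predicted y
    w = gaussian_kernel(residuals / bandwidth)
    W = np.sqrt(w)                     # weighted least squares via row rescaling
    beta, *_ = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)

print("intercept, slope:", beta)  # the outlier now has near-zero influence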
Is this basically automating what we did manually in the polynomial regression post (https://bowtiedraptor.substack.com/p/feature-engineering-part-2-polynomial)?
Both polynomial regression and kernel regression attempt to squeeze as much value as possible out of our data. One does it by tweaking the columns themselves, and the other does it by changing how the underlying regression does its calculations. To figure out which one you should use, you'll want to try both of them and see which performs better, as in the sketch below.
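One way to run that head-to-head, sketched here assuming scikit-learn is available and using KernelRidge as a stand-in for the kernel-based model (your exact kernel regression implementation may differ):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

# Toy nonlinear data (illustrative only)
rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(200, 1))
y = np.sin(X[:, 0]) * X[:, 0] + rng.normal(scale=0.3, size=200)

# Approach 1: transform the columns (polynomial regression)
poly_model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())

# Approach 2: change how the regression itself works (kernel-based model)
kernel_model = KernelRidge(kernel="rbf", gamma=0.5)

for name, model in [("polynomial", poly_model), ("kernel", kernel_model)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```

Whichever model scores better under cross-validation on your data is the one to keep.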
Hope this helps, let me know if there are still issues.