
I work with ML regularly, and there is always something new to learn!

Another commenter mentioned L1 regularization, which is useful for linear regression. You wouldn't use it for all classes of problems. L1 regularization has to do with minimizing error of absolute values, instead of squared errors or similar.

I skimmed this article and think it's accessible: https://www.kdnuggets.com/2021/06/feature-selection-overview...

PCA is a form of dimensionality reduction, but it doesn't select features for you.
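
To make that distinction concrete, here is a minimal sketch (assumes scikit-learn; the dataset and k=3 are just for illustration): PCA components are linear combinations of every input feature, whereas a selector like SelectKBest keeps a subset of the original columns outright.

    from sklearn.datasets import load_diabetes
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_regression

    X, y = load_diabetes(return_X_y=True)   # 10 original features

    # PCA: each of the 3 components mixes all 10 original features
    pca = PCA(n_components=3).fit(X)
    print(pca.components_.shape)             # (3, 10)

    # Feature selection: keeps 3 of the original columns as-is
    selector = SelectKBest(f_regression, k=3).fit(X, y)
    print(selector.get_support(indices=True))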



I'm probably nitpicking your language, but L1 regularization is precisely that: regularization. (See https://en.wikipedia.org/wiki/Regularization_(mathematics)#R....) In your typical linear regression setting, it does not replace the squared error loss but rather augments it. In regularized linear regression, for example, your loss function becomes a weighted sum of the usual squared error loss (aiming to minimize residuals/maximize model fit) and the norm of the vector of estimated coefficients (aiming to minimize model complexity).
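
To spell that out, here is a rough NumPy sketch of the lasso-style objective described above (the names and the lam weight are mine, purely illustrative):

    import numpy as np

    def lasso_loss(X, y, beta, lam):
        residuals = y - X @ beta
        fit_term = np.sum(residuals ** 2)       # squared-error (fidelity) term
        penalty = lam * np.sum(np.abs(beta))    # L1 norm of the coefficients
        return fit_term + penalty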


Hey, I appreciate the correction! I wrote my comment late at night and definitely mangled the details. Your "nitpick" is a much-needed correction to my inaccuracy.


> L1 regularization has to do with minimizing error of absolute values

Not quite. It has to do with minimizing the sum of absolute values of the coefficients, not the error. The squared error is still the "fidelity" term in the cost function.
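
In other words (standard lasso notation, not taken from the article), the objective looks roughly like

    \min_{\beta} \; \|y - X\beta\|_2^2 + \lambda \|\beta\|_1

where the first term is the squared-error fidelity term and the second is the L1 penalty on the coefficients.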



