Articles about Data Science and Machine Learning | @carolinabento

The Multilayer Perceptron is a Neural Network that can learn both linear and non-linear relationships in the data

Image by author

This is the first article in a series dedicated to Deep Learning, a group of Machine Learning methods whose roots date back to the 1940s. Deep Learning has gained attention in recent decades for its groundbreaking applications in areas like image classification, speech recognition, and machine translation.
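What makes a Multilayer Perceptron non-linear is the activation function applied in the hidden layer. As a minimal sketch (with hand-picked weights of my own, not taken from the article), a single ReLU hidden layer is enough to compute XOR, something no purely linear model can do:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer: the ReLU is what makes the mapping non-linear."""
    h = relu(x @ W1 + b1)    # hidden activations
    return h @ W2 + b2       # linear output layer

# Hand-picked weights that make the network compute XOR exactly
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0], [-2.0]])
b2 = np.array([0.0])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(mlp_forward(X, W1, b1, W2, b2).ravel())  # [0. 1. 1. 0.], i.e. XOR
```

In practice the weights are learned by backpropagation rather than set by hand, but the forward pass above is the same computation a trained MLP performs.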

Stay…

Getting Started

Gradient Boosting algorithms tackle one of the biggest problems in Machine Learning: bias.

Image by author.

This is the third and last article in a series dedicated to Tree Based Algorithms, a group of widely used Supervised Machine Learning Algorithms.

The first article was about Decision Trees, while the second explored Random Forests. Everything explained with real-life examples and some Python code.
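To give a flavor of the idea before diving in: boosting builds an ensemble sequentially, with each new tree fitted to the residuals of the current predictions. A rough sketch (on toy data of my own, not the article's example):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.1, 200)

# Start from a constant prediction, then repeatedly fit a shallow
# tree to the residuals and add it in, scaled by a learning rate.
pred = np.full_like(y, y.mean())
learning_rate, trees = 0.3, []
for _ in range(50):
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, y - pred)              # fit the current residuals
    pred += learning_rate * tree.predict(X)
    trees.append(tree)

print(np.mean((y - pred) ** 2))  # training error shrinks as trees are added
```

Each weak tree nudges the ensemble toward the target, which is how boosting chips away at bias.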


Random Forest is a Machine Learning algorithm that tackles one of the biggest problems with Decision Trees: variance.

Image by author.

This is article number two in a series dedicated to Tree Based Algorithms, a group of widely used Supervised Machine Learning Algorithms.

The first article was about Decision Trees. The next, and last article in this series, explores Gradient Boosted Decision Trees. …
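As a quick preview of the variance point: a Random Forest averages many trees, each trained on a bootstrap sample, and that averaging is what smooths out a single tree's unstable predictions. A small sketch with scikit-learn (on toy data of my own):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.3, 300)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The forest's prediction is the average over its bootstrap-trained trees;
# individual trees disagree, and averaging tames that variance.
x_new = np.array([[0.5]])
per_tree = np.array([t.predict(x_new)[0] for t in forest.estimators_])
print(per_tree.std())                                  # spread across trees
print(np.isclose(per_tree.mean(), forest.predict(x_new)[0]))  # forest = average
```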

Getting Started

A Decision Tree is a Supervised Machine Learning Algorithm that uses a set of rules to make decisions, much like humans do.

Image by author.

This is article number one in a series dedicated to Tree Based Algorithms, a group of widely used Supervised Machine Learning Algorithms.

Stay tuned if you’d like to see Decision Trees, Random Forests and Gradient Boosting Decision Trees, explained with real-life examples and some Python code.
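As a taste of what is to come: scikit-learn can print the learned rules of a fitted tree, which makes the "set of rules" idea concrete. A toy sketch (the features and data here are hypothetical, not from the article):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical data: play outside (1) or not (0), given the
# temperature in Celsius and whether it is raining (0/1).
X = [[30, 0], [22, 0], [10, 0], [25, 1], [18, 1], [5, 0]]
y = [1, 1, 0, 0, 0, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the if/else rules the tree learned from the data
print(export_text(tree, feature_names=["temperature", "raining"]))
```

The printed output is a nested set of if/else rules on temperature and rain, which is exactly how the tree makes its decisions.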


Markov defined a way to represent real-world stochastic systems and processes that encode dependencies and reach a steady state over time.
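As a small illustration of reaching a steady state (a hypothetical two-state chain of my own, not an example from the article), repeatedly applying the transition matrix drives the starting distribution to a fixed point:

```python
import numpy as np

# Hypothetical two-state weather chain: rows are the current state,
# columns the next state (sunny, rainy).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

dist = np.array([1.0, 0.0])      # start fully "sunny"
for _ in range(50):
    dist = dist @ P              # one step of the chain

print(dist)                          # the steady-state distribution
print(np.allclose(dist, dist @ P))   # unchanged by another step: True
```

For this chain the steady state is (5/6, 1/6): no matter where you start, the distribution over states settles there.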

Image by Author

Andrei Markov disagreed with Pavel Nekrasov, who claimed that independence between variables was necessary for the Weak Law of Large Numbers to apply.

The Weak Law of Large Numbers states something like this:

When you collect independent samples, as the number of samples gets bigger, the mean of…
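The excerpt trails off here, but the gist of the Weak Law of Large Numbers is that the sample mean settles around the true mean as more independent samples arrive. A quick simulation (my own, with an arbitrarily chosen distribution):

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.exponential(scale=2.0, size=100_000)  # true mean = 2.0

# Running mean after 1, 2, 3, ... samples: noisy early on,
# then it hugs the true mean as the sample size grows.
running_mean = np.cumsum(samples) / np.arange(1, len(samples) + 1)
print(running_mean[9], running_mean[-1])
```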

HANDS-ON TUTORIALS

Your dog’s nap time as a regularized linear model

Image by author

When you’re building a machine learning model you’re faced with the bias-variance tradeoff, where you have to find the balance between having a model that:

  1. Is very expressive and captures the real patterns in the data.
  2. Generates predictions that are not too far off from the actual values.

A model…
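The excerpt is cut short, but the balancing act it describes is exactly what regularization does: a penalty shrinks the coefficients, trading a little bias for less variance. A minimal sketch with scikit-learn's Ridge (on toy data of my own, not the article's dog nap dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X[:, 0] + rng.normal(scale=0.5, size=50)  # only one feature matters

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

# The L2 penalty shrinks the coefficient vector relative to plain
# least squares, which is the bias-for-variance trade in action.
print(np.linalg.norm(ols.coef_), np.linalg.norm(ridge.coef_))
```

Raising `alpha` shrinks the coefficients further: more bias, less variance; `alpha=0` recovers ordinary least squares.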

Carolina Bento
