Image by Lorenzo Cafaro from Pixabay

Kaggle, a Google subsidiary, is an online community of machine learning enthusiasts. One of its introductory projects, California Housing Prices, is a dataset that serves as an introduction to implementing machine learning algorithms. The main focus of the project is learning to organize, graph, and understand data.

This article will discuss how to graph, organize, and set up the data for the Kaggle project using sklearn, pandas, and NumPy.

I will be using JupyterLab, and the code is written with that environment in mind.

Sklearn: Sklearn (scikit-learn) is a machine learning library for Python. Its main features support statistical modeling tasks such as regression. …
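To make the setup concrete, here is a minimal sketch of loading and inspecting the data with pandas. It uses sklearn's built-in fetch_california_housing as a stand-in for the Kaggle CSV, which is my assumption for illustration rather than the article's exact workflow.

```python
# Minimal sketch: load the California housing data and inspect it with pandas.
# sklearn's built-in fetch_california_housing stands in for the Kaggle CSV here.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing

data = fetch_california_housing(as_frame=True)
housing = data.frame                     # a pandas DataFrame of features + target

print(housing.describe())                # summary statistics for each column
housing.hist(bins=50, figsize=(12, 8))   # quick histograms of every column
plt.show()                               # renders inline automatically in JupyterLab
```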


(Image by Author)

Background Information

Q-Learning is generally deemed the "simplest" reinforcement learning algorithm, and I find myself agreeing with that assessment.

In another article, I compared Q-Learning to Deep Q-Networks. Here, I will reuse the portions that explain what Q-Learning is, its positives and negatives, and the general equation, since this is crucial background information.

Q-Learning is one of the more basic reinforcement learning algorithms because of its "model-free" nature. A model-free algorithm, as opposed to a model-based algorithm, has the agent learn policies directly. Like many other algorithms, Q-Learning has both positives and negatives [1]. As noted, Q-Learning does not require a model, nor does it require a complicated system of operation. Instead, Q-Learning uses previously explored "states" to consider future moves and stores this information in a "Q-table." For every action taken from a state, the policy table (the Q-table) records a positive or negative reward. The model starts with a fixed epsilon value, which represents the randomization of movements [1]. Over time, the randomization decreases based on the epsilon decay value. …
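The excerpt above describes the Q-table, epsilon, and epsilon decay in prose; below is a minimal, self-contained sketch of that mechanism. The environment size, learning rate, discount factor, and decay value are illustrative assumptions, not values from the original article.

```python
import numpy as np

# Illustrative, assumed hyperparameters -- not values from the original article.
n_states, n_actions = 16, 4          # size of a hypothetical toy environment
alpha, gamma = 0.1, 0.99             # learning rate and discount factor
epsilon, epsilon_decay = 1.0, 0.995  # exploration rate and its per-episode decay

# The Q-table: one row per state, one column per action, initialized to zero.
Q = np.zeros((n_states, n_actions))

def choose_action(state):
    """Epsilon-greedy: move randomly with probability epsilon, else exploit."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """Standard Q-Learning update of the Q-table entry for (state, action)."""
    best_next = np.max(Q[next_state])
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])

# After each episode, the exploration rate shrinks, as described above:
# epsilon *= epsilon_decay
```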


Kevin Ku on Unsplash

For image recognition tasks, using pre-trained models is great. For one, they are easier to use because they give you the architecture for "free." Additionally, they typically produce better results and require less training.

To see a real application of this theory, I will use Kaggle's CatVSDogs dataset to compare the results of the different methods; a minimal sketch of the pre-trained setup appears after the step list below.

The steps will be as follows:

1) Imports
2) Download and Unzip Files
3) Organize the Files
4) Set Up and Train Classic CNN Model
5) Test the CNN Model
6) Set Up and Train Pre-Trained Model
7) Test the Pre-Trained…
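The excerpt ends before the code, but a rough sketch of step 6 (setting up a pre-trained model for the binary cat/dog task) could look like the following. TensorFlow/Keras, MobileNetV2, and the 160x160 input size are my assumptions for illustration; the article's actual choice of base model may differ.

```python
# Rough sketch of step 6: building a binary classifier on top of a frozen
# pre-trained base (MobileNetV2 here, chosen only for illustration).
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),   # assumed input size
    include_top=False,           # drop the ImageNet classification head
    weights="imagenet",
)
base.trainable = False           # freeze the pre-trained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cat vs. dog
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_dataset, validation_data=val_dataset, epochs=5)
```

Freezing the base and training only the small head on top is what lets the pre-trained approach get away with less training, as noted above.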

About

Ali Fakhry

Ali Fakhry is a high school senior passionate about machine learning and computer science.
