A Google Colab link to the complete code is at the end of the article.
This article discusses a TensorFlow 2.x script, written in Python, that you can use to train a model on almost any CSV file.
Before getting started, here is what this script can and cannot do.
This article is a small tutorial on how to use the package ‘dfcleaner’. It has many small helper functions like sanitize(), remove_outliers(), spot_irrelevant_columns() and many more.
It also has inbuilt logging capabilities so that you know exactly what changed in the pandas DataFrame before and after applying any function.
So, without any further delay, let’s get started with the tutorial.
dfcleaner is available on PyPI, so you can install it with pip.
OS X & Linux:
pip3 install dfcleaner
Windows:
pip install dfcleaner
from dfcleaner import cleaner
We also need pandas to read CSV files and create DataFrames.
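As a quick refresher (a minimal sketch — the column names and values here are made up, and a real project would pass a file path instead of an in-memory buffer), this is all it takes to turn a CSV into a DataFrame:

```python
from io import StringIO

import pandas as pd

# A tiny in-memory CSV stands in for a file on disk; with a real file
# you would call pd.read_csv("data.csv") instead.
csv_data = StringIO("name,age\nAlice,30\nBob,25\n")

df = pd.read_csv(csv_data)
print(df.shape)  # (2, 2)
```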
If you have been using pandas for a while, you might already know how to import simple CSV files. But in reality, data is stored in many different file formats. What would you do if your side project required you to access, say, MATLAB, JSON, Stata or even HDF5 files in Python?
This article will show you how easy it is to import those files using pandas.
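To give a flavour of what that looks like (a sketch with hypothetical file names, not the article's own code), pandas exposes a family of `read_*` functions that all return a DataFrame. JSON is shown executed below; the other formats follow the same pattern:

```python
from io import StringIO

import pandas as pd

# JSON parses directly into a DataFrame.
json_data = StringIO('[{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]')
df = pd.read_json(json_data)

# The other formats follow the same read_* pattern (paths are hypothetical):
#   pd.read_stata("survey.dta")        # Stata
#   pd.read_hdf("store.h5", "table")   # HDF5 (needs the pytables package)
#   scipy.io.loadmat("data.mat")       # MATLAB, via SciPy rather than pandas
print(list(df.columns))  # ['name', 'age']
```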
Let’s start off with pickle. Pickle is a file format native to Python, which means you cannot use the data in other programming languages.
The idea behind pickle is to serialize a given Python…
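In concrete terms (a minimal sketch using the standard-library `pickle` module; the object being serialized is made up), serialization turns a Python object into bytes and back:

```python
import pickle

# Serialize an ordinary Python object to bytes and restore it again.
scores = {"alice": 91, "bob": 84}

blob = pickle.dumps(scores)    # bytes you could write to a .pkl file
restored = pickle.loads(blob)  # back to an equivalent Python object

print(restored == scores)  # True
```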
There seem to be two main types of people in the world, crosswords and sudokus. — Rebecca McKinsey
In this article, you will learn about constraint propagation and search by making a program that solves a Sudoku board. All of the code is written in Python 3 and is available on my GitHub profile, along with instructions on how to use and execute it.
Before jumping straight into the code, let’s first get familiar with some terminology and the rules of Sudoku.
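As a taste of what constraint propagation means here (a toy sketch over a single row, not the actual solver from the GitHub repository), a solved cell eliminates its digit from the candidate sets of every other cell in the same unit:

```python
# Toy sketch of constraint propagation on one Sudoku row: a solved cell
# removes its digit from the candidates of every other cell in the unit.
row = [5, 0, 0, 3, 0, 0, 0, 0, 9]  # 0 = empty cell

# Start with every digit as a candidate for each empty cell.
candidates = [{v} if v else set(range(1, 10)) for v in row]

# Propagate: each solved value is impossible for its peers.
for i, v in enumerate(row):
    if v:
        for j, cand in enumerate(candidates):
            if j != i:
                cand.discard(v)

print(candidates[1])  # the digits still possible for the second cell
```

The full solver repeats this kind of elimination across rows, columns, and boxes, then falls back to search when propagation alone stalls.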
I’m glad to announce a brand new series where I will teach you how to approach and solve some of the most common interview questions.
Please remember that all of the code will be written in Python or C, but you can implement the same in the language of your choice — generally, interviewers will give you the freedom to do so.
Also, an interviewer always wants to know what’s going on inside your mind, so make sure you express any thought process you have out loud. The way I like to do this is by pretending I’m a teacher…
This guide is the first segment of the second part in the two-part series, one with Preprocessing and Exploration of Data and the other with the actual Modelling. This segment (Part-2a) of Part-2 deals with the “Machine Learning” models, while the other segment (Part-2b) deals with the “Deep Learning” models. The data set used here comes from superdatascience.com. Huge shout out to them for providing amazing courses and content on their website, which motivates people like me to pursue a career in Data Science.
Don’t focus too much on the code throughout the course of this article but…
This guide is the first part in the two-part series, one with Preprocessing and Exploration of Data and the other with the actual Modelling. The data set used here comes from superdatascience.com. Huge shout out to them for providing amazing courses and content on their website, which motivates people like me to pursue a career in Data Science.
Don’t focus too much on the code throughout the course of this article, but rather get the general idea of what happens during the Preprocessing stage.
Also, this is a looong article so don’t forget to grab some coffee with…
Just a college student who strives to be a data scientist