Tutorial: Complete a Kaggle Data Science Competition Fast

Data Basics | Use Cases & Projects | Alivia Smith

At Dataiku, every new member of the team, whether marketer or superstar data scientist, learns the Dataiku platform with the Titanic Kaggle competition. It's such a milestone in the company that our first meeting room was named after it! So here's a tutorial to get your first Kaggle submission done in 5 minutes!

titanic-gif

Why Kaggle?

Kaggle is a great site where companies and researchers post datasets and make them available to data scientists, statisticians, and pretty much anyone who wants to play around with them to create insights. Some focus on serious topics like fraud detection, while others are more light-hearted technical challenges like this one on generating dog images.

Many Dataiku data scientists participate in Kaggle competitions, but a favorite is still the Titanic. Why? Because everyone can understand it: the goal of the challenge is to predict who on the Titanic will survive.

This is a relatively easy project. Nevertheless, it'll be easier after completing the three core Getting Started tutorials, or with some experience with Dataiku.


Let's go right ahead and save some lives! Follow these steps to become a Kaggler:

Collect Data From Kaggle

Before really getting started, create an account on Kaggle. Don’t worry, these guys aren't big on spamming (at all).

Now, to begin the challenge, go to this link. This is where to get the data sources.

We only need two datasets for the project: the train dataset and the test dataset. So click on these to download them.

Now open up Dataiku Data Science Studio (or download the community edition here).

Create a new project. (Changing the project image is optional).

Titanic-tutorial-project

Getting Started

Click to import the first dataset. Upload both csv files (separately) to create the test and train datasets.

Check out the flow. This is what it should look like:

test and train

Interlude: Test vs Train

When working on machine learning projects, there will always be a test and a train dataset. 

The train dataset is a set of instances that have already been scored. In other words, the feature we want to predict is already known for each datapoint. This is the dataset the algorithm is trained on (hence the name).

The test dataset is the dataset that the algorithm is deployed on to score the new instances. In this case, this is the dataset submitted to Kaggle. Here, it's called 'test' because it's the dataset used by Kaggle to test the results of each submission and make sure the model isn’t overfitted. In general, it'll just be the data that comes in that needs to be scored.

Tip: Both datasets have to have the same features of course. So, at Dataiku, we'll often stack them at the beginning of our data wrangling process, and then split them just before training our algorithm.
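If you ever want to reproduce that stack-then-split pattern in code, here is a minimal pandas sketch (it assumes the train.csv and test.csv files downloaded from Kaggle; the wrangling step is just a placeholder):

```python
import pandas as pd

# Load the two files downloaded from Kaggle
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Stack them so any cleaning or enrichment is applied identically to both
test["Survived"] = pd.NA               # the test set has no target column
full = pd.concat([train, test], keys=["train", "test"])

# ... shared data wrangling would go here ...

# Split again just before training
train_clean = full.loc["train"].copy()
test_clean = full.loc["test"].drop(columns=["Survived"])
```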

Build the ML Model

Now, click on the train dataset to explore the data.

Our goal today is to submit on Kaggle as fast as possible, so I won't go into analyzing the different features and cleaning or enriching the data. At a glance, notice that there is a unique passenger ID, and 11 features including the feature we want to predict: 'survived'.
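For a quick sanity check of the same thing outside of Dataiku, a small pandas sketch:

```python
import pandas as pd

train = pd.read_csv("train.csv")

print(train.shape)              # (891, 12): PassengerId plus 11 other columns
print(train.columns.tolist())   # 'Survived' is the column we want to predict
print(train["Survived"].value_counts(normalize=True))  # overall survival rate
```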

Let's go ahead and click on Analyze to create a new analysis.

analyze

The next step is to go straight to the header of the 'survived' column, click, and select the option to build a prediction model. Click Create Prediction Model, and the model is ready to train. Now that was easy!

create prediction model

Click on Train. There are a few options for predicting, but we’ll start with the simplest ones. Select Automated Machine Learning and then Quick Prototypes. And voilà - the model is done!

creating the model

By default, Dataiku trains on the random forest and logistic regression algorithms, and ranks them by performance as measured by the ROC AUC. The random forest algorithm outperforms the other here, so click on that one.

random forest
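As a rough code equivalent of that comparison, here is a scikit-learn sketch; the feature preparation below is a simplification for illustration, not what Dataiku does by default:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

train = pd.read_csv("train.csv")

# Very rough feature prep: a few numeric columns plus an encoded 'Sex'
X = train[["Pclass", "Fare", "SibSp", "Parch"]].copy()
X["Sex"] = (train["Sex"] == "female").astype(int)
X["Age"] = train["Age"].fillna(train["Age"].median())
y = train["Survived"]

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# Train both algorithms and rank them by ROC AUC on a held-out split
for model in (RandomForestClassifier(n_estimators=100, random_state=42),
              LogisticRegression(max_iter=1000)):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(type(model).__name__, round(auc, 3))
```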

It is interesting to have a quick look at the Variables Importance. Pretty unsurprisingly, gender is the most decisive feature, followed by how much a passenger paid and their class. As far as this model is concerned, the Titanic wasn't so much "women and children first" as "rich women before rich men."

variable importance
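The same kind of ranking can be read off a scikit-learn random forest through its feature_importances_ attribute; a short continuation of the sketch above (it reuses the X and y defined there):

```python
# Reusing X and y from the previous sketch
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances)   # 'Sex' typically dominates, with 'Fare' and 'Pclass' also ranking high
```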

Deploy the Model

Now that the model is built, it’s time to go ahead and deploy it. So go to the right corner and click Deploy. Keep the default name and deploy it on the train dataset.

Now there is a new step in the flow! Check out that model. ;)

deployed model

Apply the Model

The next step is to apply that model to the test dataset. In the flow, click on the model and then click on Apply Model on Data to Predict on the right. Select the test dataset and hit create recipe.

applying the model

Then, run the model with the default settings.

running the model

Then explore the output dataset test_scored. There are three new columns: a prediction column at the far right indicating whether the model predicts that the passenger survived or not, and two columns with the probability of each outcome.

test-scored
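In code terms, those three columns are what you would get from predict_proba and predict; a sketch that continues the earlier one (assuming the fitted rf and the same simplified feature prep applied to test.csv):

```python
# Prepare test.csv the same way as the training features
test = pd.read_csv("test.csv")
X_test = test[["Pclass", "Fare", "SibSp", "Parch"]].copy()
X_test["Sex"] = (test["Sex"] == "female").astype(int)
X_test["Age"] = test["Age"].fillna(test["Age"].median())
X_test["Fare"] = X_test["Fare"].fillna(X_test["Fare"].median())   # test.csv has one missing Fare

proba = rf.predict_proba(X_test)       # two columns: probability of 0 (died) and of 1 (survived)
scored = test[["PassengerId"]].copy()
scored["proba_0"] = proba[:, 0]
scored["proba_1"] = proba[:, 1]
scored["prediction"] = rf.predict(X_test)   # the class with the higher probability
```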

Format the Output Dataset

The final step is to prepare the submission. Kaggle requires a specific format: a csv file with two columns, the passenger ID and the predicted output, with the column names 'PassengerId' and 'Survived'.

So go ahead and create an analysis of the scored dataset. Rename the prediction column 'Survived'. Then add a step in the analysis to retain only the Passenger ID and the prediction columns.

formatting test-scored
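The code equivalent of that formatting step, assuming the scored DataFrame from the previous sketch:

```python
# Keep only the passenger ID and the prediction, with the column names Kaggle expects
submission = scored[["PassengerId", "prediction"]].rename(columns={"prediction": "Survived"})
submission["Survived"] = submission["Survived"].astype(int)   # Kaggle expects 0/1 integers
submission.to_csv("submission.csv", index=False)
```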

Now that it’s in the right format, deploy the script, rename the dataset (optional), and select to build the new dataset now.

Back in the flow, click on the final dataset. On the right, click on Export and download it (in csv).

final flow

Time To Submit!

Go to this page and hit Submit Predictions to make the submission! Drag and drop that csv file and submit.

scored-submission

I got a score of 0.75598, meaning the model correctly predicted about 76% of the passengers in the test set. And I ranked 9,493rd out of 11,718. Okay, that's not great, but it leaves a lot of room to improve! And for five minutes of work, that's not too bad.

Go Further:
Finish Your Passion Project

Kaggle competitions are not the only way to explore datasets and drive insights into exciting topics. See how Rebecka created her own quick codeless project analyzing UFO sightings and try leveraging Dataiku to create your own data projects fast.

