What is correlation?
In statistics, dependence refers to any statistical relationship between two random variables or two sets of data. Correlation refers to any of a broad class of statistical relationships involving dependence.
In layman's terms, correlation is a relationship between data attributes. For a quick refresher: in data mining, a dataset is made up of different attributes, and we use these attributes to classify or predict a label. Some attributes have more "meaning," or influence, over the label's value. As you can imagine, if you can determine the influence that specific attributes have over your data, you are in a better position to build a classification model, because you will know which attributes to focus on when building your model.
In this example, I will use the kaggle.com Titanic data mining challenge dataset. This post will not uncover any information that is not readily available in the tutorial posted on kaggle.com.
Here are two screenshots: the first shows some statistics about the dataset, and the second shows a sample of the data.
Meta data view of the Titanic data mining challenge Training dataset
A data view of the dataset
The correlation matrix
First, start by importing the Titanic training dataset into RapidMiner. You can use Read From CSV, Read From Excel, or Read From Database to accomplish this step. Next, search for the "Correlation Matrix" operator and drag it onto the process surface. Connect the Titanic training dataset's output port to the Correlation Matrix operator's example input port. Your process should look like this.
Now run the process and observe the output.
You are presented with several different result views. The first is the Correlation Matrix Attribute Weights view, which displays the "weight" of each attribute. The purpose of this tutorial, however, is to explain a different view, so click on the Correlation Matrix view. This matrix shows the correlation coefficients, which measure the strength of the relationship between our attributes.

An easy way to get started with the correlation matrix is to notice that wherever an attribute intersects with itself, you have a dark blue cell with the value of 1, which represents the strongest possible value. This is because any attribute matched with itself is a perfect correlation. A correlation coefficient can be positive or negative, and a negative value does not necessarily mean there is less of a relationship between the values represented: the larger the coefficient in either direction, the stronger the relationship between those two attributes.

If we look at the matrix and follow along the top row (survived), we will see the attributes that have the strongest correlation with the label we are trying to predict.
Just as the kaggle.com tutorial specifies, the attributes with the strongest correlation with the label (survived) are
sex (0.295), pclass (0.115), and fare (0.66)
Remember that the value as well as the color will help you to visually identify the stronger correlation between attributes.
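To get a feel for what RapidMiner is computing, here is a minimal sketch in Python using pandas. The values below are made up purely for illustration; only the column names echo the Titanic attributes discussed above:

```python
import pandas as pd

# Toy data loosely mirroring the Titanic attributes
# (hypothetical values, for illustration only).
df = pd.DataFrame({
    "survived": [0, 1, 1, 0, 1, 0],
    "pclass":   [3, 1, 2, 3, 1, 3],
    "fare":     [7.25, 71.28, 13.00, 8.05, 53.10, 7.90],
})

# Pearson correlation matrix. The diagonal is always 1.0,
# since every attribute correlates perfectly with itself.
corr = df.corr()
print(corr)
```

Just like in RapidMiner's matrix view, the diagonal is all 1s and the matrix is symmetric, so you only need to read along one row to compare attributes against the label.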
If you are working with a classification problem, I'm sure you can see how valuable the correlation matrix can be in showing you the relationships between your label and attributes. Such insights provide a great start on where to focus your attention when building your classification model.
Thanks for reading and keep your eyes open for my next tutorial!
Tips and tricks, Tip #1: How to use SQL Server named instances with RapidMiner Read/Write Database operators
Hello and welcome to my first of many tips and tricks for RapidMiner. If you are unfamiliar with RapidMiner, it's an open-source, Java-based data mining solution. You can visit the official RapidMiner website by clicking here. My plan is to write short articles providing solutions to problems that I encounter as I learn more about this awesome application.
RapidMiner and database connectivity
There are many operators in RapidMiner that take input data sets and generate models for prediction and analysis. Often, you will want to write the result set of the model to a database. To do this you use the "Write Database" operator.
I was using RapidMiner for web mining by way of the Crawl Web operator. The example set output of the Crawl Web operator was connected to the input of the Write Database operator. At the time, I was using a SQL Server database that I pay for through my web hosting account. Like most everything in RapidMiner, the setup was easy and worked like a charm. However, my database size quota was 200MB under my hosting plan, and it became apparent that I would quickly run out of space. So I decided to use the local SQL Express 2012 named instance on my machine, and this is where the problem was introduced: I couldn't figure out how to successfully set up the database connection in RapidMiner.
RapidMiner, Named Instances, and Integrated Security
The issues that I encountered when trying to set up my local SQL Server 2012 named instance were as follows:
If I used the named instance for the server name (localhost\SQLExpress), I was unable to connect. I didn't encounter this problem with my hosting server's database because it was a direct hostname (xxx.sqlserverdb.com); there was no instance name, so the configuration was easy.
I wasn't sure how to specify integrated security, as this is something you usually specify in the connection string. I didn't encounter this problem with my hosting database server either, because I was given a username and password to connect to the server.
After some research and banging my head against my laptop, I finally figured out the resolution to my problems and I'm here to save someone else the headache.
For the named instance issue, there is a trick that is not readily apparent. You set your database server name as usual (in my case, localhost); however, when you specify the database name, you append a semicolon (;) followed by instance=<instance name>. So for my local server instance (localhost\sqlexpress), I set the Host value to localhost and the Database scheme value to mydatabasename;instance=sqlexpress.
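For reference, this maps onto the jTDS JDBC URL that RapidMiner builds under the hood. Assuming the same placeholder database name, the equivalent connection string looks roughly like this (the port can usually be omitted, since the instance= property lets jTDS discover it):

```
jdbc:jtds:sqlserver://localhost/mydatabasename;instance=sqlexpress
```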
As for the integrated security requirement, all you need to do is make sure that you have the latest jTDS SQL Server driver from here. Once you download the zip file, extract the file jtds-1.3.0-dist.zip\x86\SSO\ntlmauth.dll and place it in your windows\system32 directory. This ensures that the driver is capable of using integrated security. Once this file is in place, you simply leave the username and password values blank. Here is a screenshot of the Manage Database Connections window in RapidMiner for your reference.
Well that about wraps it up. Please leave a comment if you have any questions.
Until next time,
Greetings! And welcome to another wam bam, thank you ma'am, mind blowing, flex showing, machine learning tutorial here at refactorthis.net!
This tutorial is based on a machine learning toolkit called RapidMiner by Rapid-I. RapidMiner is a full-featured, Java-based, open-source machine learning toolkit with support for all of the popular machine learning algorithms used in data analytics today. The library supports the following machine learning algorithms (to name a few):
Naive Bayes (kernel)
Decision Tree (Weight-based, Multiway)
Vector Linear Regression
Support Vector Machine (Linear, Evolutionary, PSO)
k-Means (kernel, fast)
And much much more!!
Excited yet? I thought so!
How to create a decision tree using RapidMiner
When I first ran across screen shots of RapidMiner online, I thought to myself, "Oh boy.. I wonder how much this is going to cost...". The UI looked so amazing. It's like Visual Studio for Data Mining and Machine learning! Much to my surprise, I found out that the application is open source and free!
Here is a quote from the RapidMiner site:
RapidMiner is unquestionably the world-leading open-source system for data mining. It is available as a stand-alone application for data analysis and as a data mining engine for the integration into own products. Thousands of applications of RapidMiner in more than 40 countries give their users a competitive edge.
I've been trying some machine learning "challenges" recently to sharpen my skills as a data scientist, and I decided to use RapidMiner to tackle the kaggle.com machine learning challenge called "Titanic: Machine Learning from Disaster" . The data set is a CSV file that contains information on many of the passengers of the infamous Titanic voyage. The goal of the challenge is to take one CSV file containing training data (the training data contains all attributes as well as the label Survived) and a testing data file containing only the attributes (no Survived label) and to predict the Survived label of the testing set based on the training set.
Warning: Although I'm not going to provide the complete solution to this challenge, I warn you, if you are working on this challenge, then you should probably stop reading this tutorial. I do provide some insights into the survival data found in the training data set. It's best to try to work the challenge out on your own. After all, we learn by TRYING, FAILING, TRYING AGAIN, THEN SUCCEEDING. I'd also like to say that I'm going to do my very best to go easy on the THEORY of this post.. I know that some of my readers like to get straight to the action :) You have been warned..
Why a decision tree?
A decision tree model is a great way to visualize a data set to determine which attributes of a data set influenced a particular classification (label). A decision tree looks like a tree with branches, flipped upside down.. Perhaps a (cheesy) image will illustrate..
After you are finished laughing at my drawing, we may proceed....... OK
In my example, imagine that we have a data set related to lifestyle and heart disease. Each row has a person, their sex, age, Smoker (y/n), Diet (good/poor), and a label Risk (Less Risk/More Risk). The data indicates that the biggest influence on Risk turns out to be the Smoker attribute, so Smoker becomes the first branch in our tree. For smokers, the next most influential attribute happens to be Age; for non-smokers, however, the data indicates that Diet has a bigger influence on the risk. The tree keeps branching until a classification is reached or the maximum "depth" that we establish is reached. So as you can see, a decision tree can be a great way to visualize how a decision is derived from the attributes in your data.
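The heart-disease example can be sketched with scikit-learn's DecisionTreeClassifier. The rows and the numeric encoding below are invented purely for illustration, and with so few rows the learned splits won't necessarily match the story above, but the mechanics are the same:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical lifestyle data: columns are [smoker, poor_diet, age];
# the label is risk (1 = More Risk, 0 = Less Risk).
X = [
    [1, 0, 60], [1, 1, 55], [1, 1, 65], [1, 0, 30],
    [0, 1, 60], [0, 0, 45], [0, 0, 62], [0, 0, 35],
]
y = [1, 1, 1, 0, 1, 0, 0, 0]

# Cap the depth, just like RapidMiner's Maximum Depth parameter.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# Print the learned branches, much like RapidMiner's tree view.
print(export_text(tree, feature_names=["smoker", "poor_diet", "age"]))
```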
RapidMiner and data modeling
Ready to see how easy it is to create a prediction model using RapidMiner? I thought so!
Create a new process
When you are working in RapidMiner, your project is known as a process. So we will start by running RapidMiner and creating a new process.
The version of RapidMiner used in this tutorial is version 5.3. Once the application is open, you will be presented with the following start screen.
From this screen you will click on New Process
You are presented with the main user interface for RapidMiner. One of the most compelling aspects of RapidMiner is its ease of use and intuitive user interface. The basic flow of this process is as follows:
Import your test and training data from CSV files into your RapidMiner repository. This can be found in the repository menu under Import CSV file
Once your data has been imported into your repository, the datasets can be dragged onto your process surface for you to apply operators
You will add your training data to the process
Next, you will add your testing data to the process
Search the operators for Decision Tree and add the operator
In order to use your training data to generate a prediction on your testing data using the Decision Tree model, we will add an "Apply Model" operator to the process. This operator has an input that you will connect to the model output of your Decision Tree operator, and another input that takes "unlearned" data from the output of your testing dataset.
You will attach the outputs of Apply Model to the results connectors on the right side of the process surface.
Once you have designed your model, RapidMiner will show you any problems with your process and will offer "Quick fixes", if they exist, that you can double-click to resolve.
Once all problems have been resolved, you can run your process and you will see the results that you wired up to the results side of the process surface.
Here are screenshots of the entire process for your review
Add the training data from the repository by dragging and dropping the dataset that you imported from your CSV file
Repeat the process and add the testing data underneath the training data
Now you can search in the operators window for Decision Tree operator. Add it to your process.
The way that you associate the inputs and outputs of operators and data sets is by clicking on the output of one item and connecting it by clicking on the input of another item. Here we are connecting the output of the training dataset to the input of the Decision Tree operator.
Next we will add the Apply model operator
Then we will create the appropriate connections for the model
Observe the quick fixes in the problems window at the bottom. You can double-click the quick fixes to resolve the issues.
You will be prompted to make a simple decision regarding the problem that was detected. Once you resolve one problem, other problems may appear. Be sure to resolve all problems so that you can run your process.
Here is the process after resolving all problems.
Next, I select the decision tree operator and I adjust the following parameters:
Maximum Depth: change from 20 to 5.
Check both boxes to make sure that the tree is not "pruned".
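For comparison, the whole flow (train a depth-limited tree, then apply the model to unlabeled test data, as the Apply Model operator does) can be sketched in scikit-learn. The column names and values below are hypothetical stand-ins for the imported CSVs, and max_depth=5 mirrors the parameter change above:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-ins for the imported training/testing CSVs.
train = pd.DataFrame({
    "pclass":   [1, 3, 2, 3, 1, 2],
    "fare":     [80.0, 7.9, 13.0, 8.1, 60.0, 26.0],
    "survived": [1, 0, 1, 0, 1, 1],
})
test = pd.DataFrame({
    "pclass": [1, 3],
    "fare":   [70.0, 7.8],
})

# "Decision Tree" operator: maximum depth lowered from 20 to 5.
model = DecisionTreeClassifier(max_depth=5, random_state=0)
model.fit(train[["pclass", "fare"]], train["survived"])

# "Apply Model" operator: label the unlearned test data.
test["prediction"] = model.predict(test[["pclass", "fare"]])
print(test)
```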
Once this has been done, you can Run your process and observe the results. Since we connected both the model as well as the labeled result to the output connectors of the process, we are presented with a visual display of our Decision Tree (model) as well as the Test data set with the prediction applied.
(Decision Tree Model)
(The example test result set with the predictions applied)
As you can see, RapidMiner makes complex data analysis and machine learning tasks extremely easy with very little effort.
This concludes my tutorial on creating Decision Trees in RapidMiner.
Until next time,