Handling a combined dataset of numerical and categorical features for Regression
I have a dataset with a large number of categorical features and a few numerical features, and I want to predict the probability that any given input falls into one of the two classes of a binary output feature. I don't want to frame this as a classification problem, for reasons that are well outlined in this link.

I have seen many tutorials on handling numerical and categorical features independently, but I am less sure how to handle them together.

Tags: python, regression, pandas
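For reference, one common way to handle both feature types in a single model is scikit-learn's ColumnTransformer, which applies separate preprocessing to the numerical and categorical columns before feeding everything into one estimator. This is only a sketch; the column names and data below are invented for illustration.

```python
# Sketch: preprocessing numerical and categorical columns together,
# then predicting a class probability. All data here is made up.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 32, 47, 51],                      # numerical
    "income": [40_000, 52_000, 61_000, 58_000],   # numerical
    "colour": ["red", "blue", "red", "green"],    # categorical
    "city": ["NY", "LA", "NY", "SF"],             # categorical
    "target": [0, 1, 1, 0],                       # binary output
})
X, y = df.drop(columns="target"), df["target"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age", "income"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["colour", "city"]),
])

# predict_proba gives the per-row class probability the question asks for
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(X, y)
probs = model.predict_proba(X)[:, 1]
```

The two branches of the ColumnTransformer run independently and their outputs are concatenated, so each feature type gets the preprocessing it needs without splitting the dataset by hand.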
asked May 21 '18 at 12:49
dward4
3 Answers
"I don't want to solve a classifier problem for reasons that are well outlined in this link."

I doubt the link is telling you to stop performing classification tasks; the problem you propose is a classic example of classification. As I understand your source, it advises you not to use scoring rules as a heuristic.

For the problem you described, I would propose a simple Naive Bayes approach. To make your numerical values discrete, you can simply use the mean of two adjacent numeric values as a threshold. E.g. for a list [1, 2] of numeric values, split them at a threshold of 1.5 and check above and below.

answered May 21 '18 at 13:51 by André
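The midpoint-threshold idea above can be sketched in a few lines of plain Python (the helper names are my own, not from any library):

```python
# Discretize a numeric feature by splitting at the mean of each pair of
# adjacent sorted unique values, as suggested in the answer above.
def midpoint_thresholds(values):
    """Return candidate split points midway between adjacent sorted values."""
    uniq = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(uniq, uniq[1:])]

def discretize(x, thresholds):
    """Map a value to a bin index: the number of thresholds it exceeds."""
    return sum(x > t for t in thresholds)

ts = midpoint_thresholds([1, 2])
print(ts)                    # [1.5]
print(discretize(1.2, ts))   # 0 -> below the threshold
print(discretize(1.8, ts))   # 1 -> above the threshold
```

With more than two distinct values this produces one threshold per adjacent pair, so the feature is split into as many bins as there are unique values.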
One could approach this in two general ways:

1) bottom up: unifying the data somehow to begin with
2) top down: deciding how the data needs to look based on the final model you wish to use

Do you already know which model you will use? If that is fixed (for whatever reason), you already know what form you need to get your data into, be it numerical or categorical. As you tagged your question with regression, I can tell you that you need to make all of your data numerical so that regression can work.

An example of making numerical data categorical is putting it into bins. Imagine we have values ranging from zero to ten: [0.173, 7.88, 3.91, ...]. You could simply say that values between 0.00 and 0.99 are category A, values between 1.00 and 1.99 are category B, and so on.

[Edit:] A slightly more sophisticated approach is to define the bins based on characteristic statistics of your dataset. For example, have a look at the binning methods implemented in Python's NumPy. Of the available methods, I have found the Doane method to work best, but it will depend on your data, so read the descriptions.

Making categorical values numerical in a meaningful way depends a little more on your data. It is easy to make them numeric, but you should do it in a way that retains as much as possible of the information each variable contains, as well as the relative relationships between the categories you started with. E.g. converting colours into integers would allow you to perform regression, but if yellow becomes 1 and purple becomes 10, the model has to learn that purple isn't necessarily ten times "bigger" than yellow, and that is difficult in the context of regression!

edited May 21 '18 at 14:59, answered May 21 '18 at 13:45 by n1k31t4

– dward4 (May 21 '18 at 14:15): Great, currently I'm using one-hot encoding on my columns to create a larger data frame of binary variables. Thanks for the in-depth way of thinking through it.

– n1k31t4 (May 21 '18 at 15:00): @dward4 - you're welcome :) - have a look at the extra information I added regarding the use of histogram methods.
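Both binning approaches described above can be sketched with pandas and NumPy. The sample values are invented; `'doane'` is one of the estimators accepted by NumPy's `histogram_bin_edges`.

```python
# Sketch: fixed-width bins (0-0.99 -> A, 1-1.99 -> B, ...) via pandas.cut,
# and data-driven bin edges via NumPy's histogram estimators.
import numpy as np
import pandas as pd

values = pd.Series([0.173, 7.88, 3.91, 1.05, 9.4, 2.2, 6.7])

# Fixed-width bins, exactly as in the example above
labels = list("ABCDEFGHIJ")
fixed = pd.cut(values, bins=range(0, 11), labels=labels, right=False)
print(fixed.tolist())  # ['A', 'H', 'D', 'B', 'J', 'C', 'G']

# Data-driven edges from the dataset's own statistics (Doane's rule)
edges = np.histogram_bin_edges(values, bins="doane")
data_driven = pd.cut(values, bins=edges, include_lowest=True)
print(edges)
```

Swapping the string passed as `bins` ('doane', 'sturges', 'fd', ...) changes the estimator, so it is easy to compare how each rule partitions your data.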
Adding to the answers above, there is one more way to do this: target encoding, which means encoding your categorical features according to the target variable, using some aggregate (it works out of the box).

answered May 21 '18 at 16:06 by Aditya
Thanks for contributing an answer to Data Science Stack Exchange!