Single machine learning algorithm for multiple classes of data: one-hot encoder



I have data of the following kind:



 x1 x2 y
0 0 1 1
1 0 2 2
2 0 3 3
3 0 4 4
4 1 1 4
5 1 2 8
6 1 3 12
7 1 4 16


Is it possible to construct a single machine learning model in Python/scikit-learn, using column x1 in such a way that a simple linear regression gives predict(x1=0, x2=5) = 5 and predict(x1=1, x2=5) = 20? My actual problem has multiple values of x1.



To illustrate the problem better, here is my code using a one-hot encoder; it doesn't seem to reach the accuracy obtained by training on the two subsets of data separately.



import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Dataframe with x1 = 0: linear regression gives a slope of 1 as expected

df = pd.DataFrame(data=[{'x1': 0, 'x2': 1, 'y': 1},
                        {'x1': 0, 'x2': 2, 'y': 2},
                        {'x1': 0, 'x2': 3, 'y': 3},
                        {'x1': 0, 'x2': 4, 'y': 4}],
                  columns=['x1', 'x2', 'y'])

X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[0, 5]])))  # Output is 5 as expected

# Dataframe with x1 = 1: linear regression gives a slope of 4 as expected

df = pd.DataFrame(data=[{'x1': 1, 'x2': 1, 'y': 4},
                        {'x1': 1, 'x2': 2, 'y': 8},
                        {'x1': 1, 'x2': 3, 'y': 12},
                        {'x1': 1, 'x2': 4, 'y': 16}],
                  columns=['x1', 'x2', 'y'])

X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[1, 5]])))  # Output is 20 as expected

# Combine the two data frames (x1 = 0 and x1 = 1)

df = pd.DataFrame(data=[{'x1': 0, 'x2': 1, 'y': 1},
                        {'x1': 0, 'x2': 2, 'y': 2},
                        {'x1': 0, 'x2': 3, 'y': 3},
                        {'x1': 0, 'x2': 4, 'y': 4},
                        {'x1': 1, 'x2': 1, 'y': 4},
                        {'x1': 1, 'x2': 2, 'y': 8},
                        {'x1': 1, 'x2': 3, 'y': 12},
                        {'x1': 1, 'x2': 4, 'y': 16}],
                  columns=['x1', 'x2', 'y'])

X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[0, 5]])))  # Output is 8.75
print(reg.predict(np.array([[1, 5]])))  # Output is 16.25

# One-hot encode x1 with get_dummies

df = pd.get_dummies(df, columns=["x1"], prefix=["x1"])
X = df[['x1_0', 'x1_1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[1, 0, 5]])))  # Output is 8.75
print(reg.predict(np.array([[0, 1, 5]])))  # Output is 16.25


How can I use pandas and sklearn on the combined data so that a single machine learning model achieves the same accuracy?
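For reference, here is a minimal sketch of one way to get the exact 5 and 20 predictions from a single model. It is an added illustration rather than something proposed in the thread, and it assumes an engineered interaction column x1*x2 is acceptable; the fitted model then has the form y = b0 + b1*x1 + b2*x2 + b3*(x1*x2), so the slope of x2 is allowed to differ between x1 = 0 and x1 = 1.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Combined data from the question
df = pd.DataFrame({'x1': [0, 0, 0, 0, 1, 1, 1, 1],
                   'x2': [1, 2, 3, 4, 1, 2, 3, 4],
                   'y':  [1, 2, 3, 4, 4, 8, 12, 16]})

# Engineered interaction feature (the column name 'x1_x2' is just illustrative):
# it lets the slope of x2 depend on x1
df['x1_x2'] = df['x1'] * df['x2']

X = df[['x1', 'x2', 'x1_x2']]
y = df['y']
reg = LinearRegression().fit(X, y)

print(reg.predict(np.array([[0, 5, 0]])))  # approximately 5
print(reg.predict(np.array([[1, 5, 5]])))  # approximately 20

With more distinct values of x1, the same idea can be combined with one-hot encoding by multiplying each dummy column by x2, so that each class of x1 gets its own slope.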










  • Welcome to datascience. This is one good link that may help you: scikit-learn.org/stable/tutorial/basic/tutorial.html – rnso, Nov 23 '18 at 15:04

  • @rnso Thank you for the link. My issue is not about setting up a simple regression problem using scikit-learn. It is more about how to handle a variable like x1 that qualitatively changes the trend of the data. In the example I gave, the ML algorithm must give slope = 1 when x1 = 0 and slope = 4 when x1 = 1. Is that possible with a single ML algorithm, or is breaking the data into two training sets the only alternative? (A sketch of that per-group alternative follows these comments.) – user3631804, Nov 23 '18 at 15:39

  • Probably you need mixed models, as on: statsmodels.org/devel/mixed_linear.html – rnso, Nov 23 '18 at 16:15

  • You should post some follow-up here. How did you solve your problem? – rnso, Nov 24 '18 at 8:07

  • If x1 will have only 2 options then you can keep only one column (x1) for the joint dataframe. Then try to predict for (0, 5) and (1, 5), and post the results here. – rnso, Nov 24 '18 at 10:45
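Below is a minimal sketch of the "separate training set per x1 value" alternative mentioned in the second comment above. It is an added illustration, not code posted in the thread; it simply keeps one fitted LinearRegression per x1 group in a plain dict.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Combined data from the question
df = pd.DataFrame({'x1': [0, 0, 0, 0, 1, 1, 1, 1],
                   'x2': [1, 2, 3, 4, 1, 2, 3, 4],
                   'y':  [1, 2, 3, 4, 4, 8, 12, 16]})

# One regressor per distinct value of x1
models = {value: LinearRegression().fit(group[['x2']], group['y'])
          for value, group in df.groupby('x1')}

print(models[0].predict(pd.DataFrame({'x2': [5]})))  # approximately 5
print(models[1].predict(pd.DataFrame({'x2': [5]})))  # approximately 20

This reproduces the 5 and 20 predictions, but at the cost of maintaining one model per class of x1, which is exactly what the question hopes to avoid.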
















machine-learning python scikit-learn






asked Nov 23 '18 at 14:31 by user3631804, edited Nov 24 '18 at 11:37





1 Answer

You can have x1 as a categorical variable, convert it to dummy variables (one hot encoder) and then run linear regression (or any other algorithm).






– rnso, answered Nov 23 '18 at 16:30

  • Thank you. I used one hot encoder and that doesn't seem to give me the answer. I improved the question by providing pseudo-code. Can you please let me know if I did something wrong with the encoder? – user3631804, Nov 24 '18 at 10:20
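As a point of reference (added here for clarity, not part of rnso's answer): the dummy-variable suggestion can also be written with scikit-learn's own OneHotEncoder inside a ColumnTransformer and Pipeline instead of pd.get_dummies. The ColumnTransformer/Pipeline arrangement is one possible choice, not something specified in the thread; note that it reproduces the 8.75 and 16.25 results from the question, because dummy columns on their own only shift the intercept and leave the slope of x2 unchanged.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Combined data from the question
df = pd.DataFrame({'x1': [0, 0, 0, 0, 1, 1, 1, 1],
                   'x2': [1, 2, 3, 4, 1, 2, 3, 4],
                   'y':  [1, 2, 3, 4, 4, 8, 12, 16]})

# One-hot encode x1, pass x2 through unchanged, then fit a linear model
pre = ColumnTransformer([('onehot', OneHotEncoder(), ['x1'])],
                        remainder='passthrough')
model = Pipeline([('pre', pre), ('reg', LinearRegression())])
model.fit(df[['x1', 'x2']], df['y'])

print(model.predict(pd.DataFrame({'x1': [0, 1], 'x2': [5, 5]})))  # approximately [8.75, 16.25]

To get the 5 and 20 behaviour asked for in the question, the dummy columns (or x1 itself) additionally need to be multiplied by x2, as in the interaction sketch under the question, so that the slope can change with x1.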










