Keras Binary Classification val_acc won't go past ~67; Full data and code included
I'm working on a binary classification problem in Keras with a TensorFlow backend. No matter how much I tweak, I can't seem to get my model past a val_acc of 67%. Is there something I'm missing, or is this simply as accurate as I can get with my data?
Link to the data I am using
My Code
Load the dataset, downsample the majority class to a 1:1 ratio, and split off validation data.
import numpy
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

raw_data = pd.read_csv('Data.csv')
df_majority = raw_data[raw_data['RESULT']==0].iloc[1:-2,0:3].dropna()
df_minority = raw_data[raw_data['RESULT']==1].iloc[1:-2,0:3].dropna()
print(raw_data['RESULT'].value_counts())

# Downsample the majority class to the size of the minority class
df_majority_downsampled = resample(df_majority,
                                   replace=False,
                                   n_samples=raw_data['RESULT'].value_counts()[1],
                                   random_state=123)

# Combine minority class with downsampled majority class
df_downsampled = pd.concat([df_majority_downsampled, df_minority])

# Display new class counts
print(df_downsampled['RESULT'].value_counts())
print(numpy.unique(df_downsampled['RESULT']))

X = df_downsampled.iloc[1:-2,0:2].dropna()
Y = df_downsampled.iloc[1:-2,2:3].dropna()
X, XTest, Y, YTest = train_test_split(X, Y, test_size=0.3, random_state=0)
print(YTest['RESULT'].value_counts())  # double check that both classes are present in the validation split
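If the two input columns are not already on a comparable scale, standardizing them is a common, cheap preprocessing step for dense networks. The snippet below is only an optional sketch reusing the X/XTest frames above; the code that follows keeps using the unscaled frames.

from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training split only, then apply the same transform to both splits
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
XTest_scaled = scaler.transform(XTest)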
Create Model
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint

def create_model(activation):
    model = Sequential()
    model.add(Dense(128, activation=activation, input_dim=2))
    model.add(BatchNormalization())
    model.add(Dense(64, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Dense(32, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Dense(16, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Dense(8, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Dense(4, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Dense(2, activation=activation))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    # Resume from the best weights saved by the checkpoint below (the file must already exist)
    model.load_weights("weights.best.hdf5")
    model.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.0001), metrics=['accuracy'])
    return model

model = create_model('relu')

filepath = "weights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]

history = model.fit(X, Y, epochs=2000, batch_size=32, shuffle=True,
                    validation_data=(XTest, YTest), verbose=0,
                    callbacks=callbacks_list)
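Since the run goes for 2000 epochs with save_best_only checkpointing, an early-stopping callback could end training once val_acc stops improving. This is only an optional sketch; the patience value of 100 epochs is an arbitrary choice, not taken from the original run.

from keras.callbacks import EarlyStopping

# Stop once val_acc has not improved for 100 consecutive epochs
early_stop = EarlyStopping(monitor='val_acc', patience=100, mode='max', verbose=1)
callbacks_list = [checkpoint, early_stop]

history = model.fit(X, Y, epochs=2000, batch_size=32, shuffle=True,
                    validation_data=(XTest, YTest), verbose=0,
                    callbacks=callbacks_list)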
Predict and get score
from sklearn.metrics import roc_auc_score

predict = model.predict_classes(X)
print(numpy.unique(predict))
# for index, val in enumerate(predict):
#     print("Predicted: %s, actual: %s, for val %s" % (val[0], Y.iloc[index].values, X.iloc[index].values))
predict = [val[0] for val in predict]
print("roc_auc score: ", roc_auc_score(Y, predict))

predict = model.predict(numpy.array([0.0235, 0.5]).reshape(-1, 2))
print(predict[0][0])
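Note that the AUC above is computed from hard 0/1 predictions on the training split. As a point of comparison (a minimal sketch reusing the XTest/YTest split above), ROC AUC can also be computed from the sigmoid probabilities on the held-out data, which is usually the more informative number:

# Sigmoid probabilities on the held-out split, scored with the roc_auc_score import above
val_probs = model.predict(XTest)
print("validation roc_auc score: ", roc_auc_score(YTest.values.ravel(), val_probs.ravel()))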
Result using current model
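To get a feel for whether ~67% is a ceiling set by the two features themselves rather than by the network, a simple baseline on the same split is a useful yardstick. The sketch below assumes the X/Y/XTest/YTest frames defined above; if logistic regression lands in the same range, the limit is more likely in the data than in the model.

from sklearn.linear_model import LogisticRegression

# Plain logistic regression fit on the same balanced training split
baseline = LogisticRegression()
baseline.fit(X, Y.values.ravel())

print("baseline validation accuracy:", baseline.score(XTest, YTest.values.ravel()))
print("baseline validation roc_auc:  ",
      roc_auc_score(YTest.values.ravel(), baseline.predict_proba(XTest)[:, 1]))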
machine-learning deep-learning keras dataset data-cleaning
asked by PavlovsCat