Neural Net Accuracy: Test Set vs Real World Data
My neural net's accuracy is high on the test set but low on new real-world image examples.
I'm looking for advice on what generally causes this scenario and how to fix it.
Is it sampling bias? Is the training/test set not representative of real-world data? Should I obtain more training/test data?
neural-network deep-learning keras tensorflow image-classification
asked Apr 12 at 6:57 by joshvarial (new contributor), edited 9 mins ago
– Simon Larsson (Apr 12 at 7:08): Probably your train and test sets are not representative, but it is impossible to tell with no information. Can you tell the difference between train/test and real-world data yourself? Can you show samples? How much data do you have? More relevant information will be helpful in getting you good answers.
– joshvarial (Apr 13 at 2:00): Train/test/real-world data are of the same format/quality. ~10,000 positive training examples and ~10,000 negative training examples for binary classification.
1 Answer
Many resources teach the process of splitting data into training, validation and test sets. This is what you want to do for "closed" datasets where it's not possible to get additional data.
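A minimal sketch of that split, using synthetic stand-in data and an illustrative 70/15/15 ratio (scikit-learn's train_test_split is one common way to do it; the numbers here are not from the answer):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data; in practice X holds your image tensors and y the labels.
X = np.random.rand(2000, 32, 32, 3).astype("float32")
y = np.random.randint(0, 2, size=2000)

# Hold out a test set first, then split the remainder into train/validation.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15 / 0.85,
    stratify=y_trainval, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # roughly 1400 / 300 / 300
```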
This assumption of a closed dataset is often not true in the real world, where it may be feasible to collect more data. Statistically speaking, it is a lot more desirable to define a test set as a new data sample that was collected separately from your training data. This might be more representative of how the model will behave in production, but sometimes even this is not enough:
A few weeks back I built an image classifier for cars. I trained it using a mix of existing datasets and the results of a web scrape. Ultimately, it was deployed through an iOS app where it was supposed to make predictions in real time. In this case, it was not enough to just create a test split or collect a new sample from the web. We needed to shoot our own images that were representative of the use case in order to make realistic assumptions about the app's performance.
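One way to make that gap visible is to evaluate the same trained model on both the internal test split and the separately collected sample. A hedged sketch, assuming a saved Keras binary classifier compiled with an accuracy metric; the file and directory names here are hypothetical:

```python
import tensorflow as tf

# Hypothetical paths: the test split comes from the same dataset mix as
# training, while the real-world sample was collected under deployment
# conditions.
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test_split", image_size=(224, 224), batch_size=32)
real_ds = tf.keras.utils.image_dataset_from_directory(
    "data/real_world_sample", image_size=(224, 224), batch_size=32)

# Assumes a model trained and compiled with an accuracy metric was saved.
model = tf.keras.models.load_model("car_classifier.keras")

_, test_acc = model.evaluate(test_ds, verbose=0)
_, real_acc = model.evaluate(real_ds, verbose=0)
print(f"test-split accuracy: {test_acc:.3f}")
print(f"real-world accuracy: {real_acc:.3f}")
# A large gap suggests the test split is not representative of production
# data (sampling bias / distribution shift) rather than a modeling problem.
```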
answered Apr 12 at 10:22 by pietz (new contributor)