SVM SMOTE fit_resample() function runs forever with no result
Problem

fit_resample(X, Y) is taking too long to complete for a dataset of 2 million rows.

Dataset specifications

I have a labeled dataset of network features, where X (features) and Y (labels) have shapes (2M, 24) and (2M, 11) respectively. That is, there are over 2 million rows in the dataset, with 24 features and 11 different classes/labels. Both X and Y are numpy arrays of float dtype.

Motivation for using SVM SMOTE

Because of the class imbalance, I figured SVM SMOTE would be a good technique to balance the dataset and thereby get better classification.

Testing with smaller sub-datasets

To test the performance of my classifier, I started small and made small sub-datasets out of the big 2-million-row dataset, then timed the following code:

%%time
from imblearn.over_sampling import SVMSMOTE

sm = SVMSMOTE(random_state=42)
X_res, Y_res = sm.fit_resample(X, Y)

The 1st sub-dataset contains only 7.5k rows: the cell ran in about 800 ms.
The 2nd sub-dataset contains 115k rows: the cell took 20 min to execute.

Solution attempts

On the full dataset, my system crashes after running continuously for more than 48 hrs, having run out of memory. I've tried some ideas, such as:

1. Splitting the work across multiple CPU cores using %%px: no improvement in execution time.
2. Using NVIDIA GPUs: same as above, which is understandable, since the _smote.py library functions aren't written with CUDA parallelism in mind.

I'm pretty frustrated by the lack of results, and a warm PC. What should I do?

python preprocessing numpy sampling smote
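As a side note, sub-datasets like the ones above can be carved out with a stratified split so that each subset keeps the original class proportions. A minimal sketch on synthetic stand-in data (all sizes, class weights, and variable names here are illustrative, not taken from the question):

```python
from collections import Counter

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real (2M, 24) network dataset.
X, y = make_classification(
    n_samples=20_000, n_features=24, n_informative=12,
    n_classes=5, n_clusters_per_class=1,
    weights=[0.5, 0.25, 0.15, 0.07, 0.03], random_state=42,
)

# Stratified 7.5k-row sub-dataset: per-class proportions are preserved,
# so the imbalance seen by SVMSMOTE matches the full dataset.
X_sub, _, y_sub, _ = train_test_split(
    X, y, train_size=7_500, stratify=y, random_state=42
)

print(len(y_sub))        # 7500
print(Counter(y_sub))    # same class ratios as Counter(y), scaled down
```

With `stratify=y`, even the rarest class keeps its share of the subset, which keeps timing runs on subsets representative of the full job.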
Try a linear SVM. It's less complex. Also, reducing your data set will help. – Jon, 2 hours ago
asked 7 hours ago by venom8914
1 Answer
This is expected and is not caused by the SMOTE sampling itself: the bottleneck is the SVM that SVM SMOTE fits internally.
The computational complexity of a non-linear SVM is on the order of $O(n^2)$ to $O(n^3)$, where $n$ is the number of samples. If it takes 0.8 s for 7.5K data points, it should therefore take roughly 3 to 48 minutes for 115K points, $$\left[(115/7.5)^2 \times 0.8\ \text{s},\ (115/7.5)^3 \times 0.8\ \text{s}\right] \approx [3, 48]\ \text{min},$$ and from 16 hours to 175 days (11 days for $O(n^{2.5})$) for 2M data points.
You should continue using sample sizes on the order of 100K or less. It is also fruitful to track the accuracy (or any other score) as a function of sample size, e.g. at 1K, 10K, 50K, and 100K samples. SVM performance may well plateau before 100K, in which case there is not much to lose by capping the sample size there.
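The extrapolation in the answer is easy to check numerically. A small sketch of the scaling rule t2 = t1 * (n2/n1)**p, using the measured 0.8 s on 7.5K rows (p is the assumed complexity exponent, not something measured):

```python
# Runtime extrapolation under O(n^p) scaling: t2 = t1 * (n2 / n1) ** p
t1, n1 = 0.8, 7_500  # 0.8 s measured on the 7.5k-row subset


def extrapolate(n2: int, p: float) -> float:
    """Predicted runtime in seconds for n2 samples, assuming O(n^p)."""
    return t1 * (n2 / n1) ** p


# 115k rows: ~3 min (p=2) to ~48 min (p=3); the observed 20 min falls inside.
print(extrapolate(115_000, 2) / 60)  # ≈ 3.1 minutes
print(extrapolate(115_000, 3) / 60)  # ≈ 48 minutes

# 2M rows: ~16 hours (p=2) to ~176 days (p=3); ~11 days for p=2.5.
print(extrapolate(2_000_000, 2) / 3_600)    # ≈ 15.8 hours
print(extrapolate(2_000_000, 3) / 86_400)   # ≈ 176 days
print(extrapolate(2_000_000, 2.5) / 86_400) # ≈ 11 days
```

The observed 20 minutes at 115K sits inside the predicted [3, 48] minute band, which is consistent with an exponent between 2 and 3 and supports capping the sample size rather than waiting out the 2M-row run.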
answered 7 hours ago by Esmailian (edited 6 hours ago)