Knowing Feature Importance from Sparse Matrix
I am working with a dataset that has a textual column as well as numerical columns. I used TF-IDF on the textual column to create a sparse matrix, converted the numerical features to a sparse matrix with scipy.sparse.csr_matrix, and combined them with the text sparse features. I then feed the combined matrix to a gradient boosting model for training and prediction. Is there any way I can plot the feature importances for this sparse matrix and recover the names of the important feature columns?
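For reference, a minimal sketch of the setup described above (the DataFrame df, the column names "text", "num_a", "num_b", the target y, and the use of TfidfVectorizer/XGBClassifier are placeholder assumptions, not part of the original post):

import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

# TF-IDF on the text column (already sparse)
vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(df["text"])          # df, "text" are assumed names

# Numerical columns converted to a sparse matrix
numeric_cols = ["num_a", "num_b"]                        # placeholder column names
X_num = sp.csr_matrix(df[numeric_cols].values)

# Combine text and numeric features side by side
X = sp.hstack([X_text, X_num]).tocsr()

model = XGBClassifier()
model.fit(X, y)                                          # y is the assumed target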
machine-learning python nlp feature-selection
asked Jan 20 at 11:48 by Debadri Dutta (348)
1 Answer
You can recover the column names of the text features from the TF-IDF vectorizer's vocabulary. vectorizer.vocabulary_ maps each token to its column index in the TF-IDF matrix, so inverting it gives the names in column order:

rev_dictionary = {v: k for k, v in vectorizer.vocabulary_.items()}
column_names_from_text_features = [rev_dictionary[i] for i in range(len(rev_dictionary))]

Since you know the column names of your other features, the full list of feature names you pass to XGBoost (after the scipy.hstack) is simply the concatenation, in the same order in which you horizontally stacked the matrices:

all_columns = column_names_from_text_features + other_columns

Once the XGBoost model is trained, you can use the plot_importance function to plot feature importances. Your code would look something like this:

import matplotlib.pyplot as plt
from xgboost import XGBClassifier, plot_importance

fig, ax = plt.subplots(figsize=(15, 8))
plot_importance(model, max_num_features=15, xlabel='F-score', ylabel='Features', ax=ax)  # model is your fitted XGBClassifier
plt.show()

The features in the plot are labeled f0, f1, f2, and so on, where the number is the column index of the feature as passed to XGBoost. Using the all_columns list constructed above, you can map those indices back to the actual feature names.
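As an illustration of that last step, a minimal sketch of mapping the f-indices back to names (assuming model is the fitted classifier and all_columns was built as above; this is one way to do it, not the original answer's code):

booster = model.get_booster()
scores = booster.get_score(importance_type='weight')     # e.g. {'f12': 34, 'f105': 17, ...}
named_scores = {all_columns[int(f[1:])]: s for f, s in scores.items()}
print(sorted(named_scores.items(), key=lambda kv: kv[1], reverse=True)[:15])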
answered Feb 28 at 21:46, edited Feb 28 at 21:54 by srjit (1,014)