Mapping between original feature space and an interpretable feature space
I'm reading the following really interesting paper on local interpretable model explanations: https://arxiv.org/pdf/1602.04938.pdf. On page 3, in Section 3.3 (Sampling for Local Exploration), they mention obtaining perturbed samples $z' \in \{0,1\}^{d'}$, and then say:
"we recover the sample in the original representation $z \in \mathbb{R}^d$ and obtain $f(z)$"
with no indication of how this is done. Surely the map is not injective? If not, how would you know you recovered the correct sample? To this end, I'm wondering how something like this might be done in practice, moving from one feature space $\mathbb{R}^d$ to another $\{0,1\}^{d'}$. I'd really appreciate any help.
machine-learning classification feature-selection feature-extraction research
asked Oct 26 '18 at 11:24 by iaaml, edited Oct 26 '18 at 11:42
1 Answer
Welcome to the community @iaaml! I hope I understood the concept right by briefly going through your reference. This is my impression:
In Section 3.1, they say:
"For example, a possible interpretable representation for text classification is a binary vector indicating the presence or absence of a word."
So I suppose the point is something like a sparse representation (you may look up sparse representations to see the methods and examples). For instance, an image vector which is predicted as "cat" can be explained by a 3-dimensional explainer, namely nose, glasses and mouth: nose and mouth are evidence for cat, while glasses is a human-face feature (a very naive example). So, having a vocabulary of all explainers, you can come up with a sparse representation of each decision in which the significant criteria are represented with 0 or 1, to help validate the prediction. This happens by examining the features which contribute the most to that class (that's why, in Figure 1, they could tell that the "No Fatigue" observation argues against the prediction).
To obtain such a representation, you build the vocabulary (a set of features which are significant and together cover all or most of the space). Then you map your data onto this space, in which the presence or absence of each vocabulary element is marked with 1 or 0.
Example
I have three samples and their corresponding predictions:
a) Spanish is the main language in Buenos Aires : Argentina
b) Apple released its new software : IT
c) Apple is the main agricultural product in Buenos Aires : Argentina
Using BoW to construct the feature space, Apple becomes an important feature since it is a famous IT company, but in the last sentence it pulls the prediction in the wrong direction. Alongside this, you can also build a map of which feature contributes the most to which class (say, through mutual information or any other feature-ranking method, computed per class) and construct the matrices below:
BuenosAires Apple
a 1 0
b 0 1
c 1 1
whereas the feature ranking says what should happen for each class:
BuenosAires Apple
a 1 0
b 0 1
c 1 0
Comparing these two flags the probably-wrong "Apple" in the last sentence (like "No Fatigue" in Figure 1). The first matrix is the mapping that you do, and the second is the mapping that the feature ranking gives you.
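A minimal sketch of this toy comparison (the vocabulary, the samples and the "expected" matrix are just the ones from the example above, written by hand; nothing here comes from the paper's code):

```python
# Vocabulary of explainer features (just the two from the toy example).
vocabulary = ["buenos aires", "apple"]

samples = [
    "Spanish is the main language in Buenos Aires",             # predicted: Argentina
    "Apple released its new software",                          # predicted: IT
    "Apple is the main agricultural product in Buenos Aires",   # predicted: Argentina
]

# Binary interpretable representation: presence/absence of each vocabulary term.
observed = [[int(term in s.lower()) for term in vocabulary] for s in samples]

# What the per-class feature ranking says should matter (hand-written here).
expected = [[1, 0],
            [0, 1],
            [1, 0]]

# Disagreements flag features that contribute to a prediction in a suspicious way.
for i, (obs, exp) in enumerate(zip(observed, expected)):
    for term, o, e in zip(vocabulary, obs, exp):
        if o != e:
            print(f"sample {chr(97 + i)}: mismatch on '{term}' (observed {o}, expected {e})")
```

Running this prints a mismatch only for sample c and the term "apple", mirroring the two matrices above.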
Hope I understood it right!
answered Oct 26 '18 at 12:04 by Kasra Manshaei
Hi Kasra, I think I understand the mapping from, let's say, sentences to the bag-of-words representation; this is the mapping from $x \in \mathbb{R}^d$ to $x' \in \{0,1\}^{d'}$. What I'm confused about is that the space that gets perturbed is $\{0,1\}^{d'}$, in a sense allowing us to create synthetic data; but then, to evaluate the model given some $z'$ from the perturbation, how do we find the original $z$? So I guess in this case it would be like saying: given a bag-of-words representation, how would I know what the original sentence looked like? Correct me if you actually did explain this.
– iaaml Oct 26 '18 at 13:55
I just looked into sparse matrix representations; it looks like, although there are infinitely many solutions, they find a way to get back to the original space. Would that be the solution here?
– iaaml Oct 26 '18 at 14:03
Yes, but here I have the same confusion. The fact is that if they choose only a fraction of the features, then they cannot recover the point in the original space, since they have cut away information. I think they either choose a fraction of features and set the others to zero, in which case you get a representation that approximates a desired neighborhood in the original space, or they perturb a fraction of features WHILE KEEPING THE REST, so that concatenating the new samples with the rest of the original features takes them back to the original space.
– Kasra Manshaei Oct 26 '18 at 14:19
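(For what it's worth, a minimal sketch of that "keep the rest" reading for a text instance: the perturbed $z' \in \{0,1\}^{d'}$ says which words of the original sentence to keep, so $z$ is recovered by dropping the switched-off words and querying the black box on the reduced text. The tokenizer and the classifier `f` below are stand-ins for illustration, not the paper's code.)

```python
import random

def f(text: str) -> float:
    """Stand-in for the black-box classifier; in practice this is the trained model."""
    return 0.5  # placeholder prediction

original = "Apple is the main agricultural product in Buenos Aires"
words = original.split()      # the d' interpretable features are the instance's own words
d_prime = len(words)

random.seed(0)
for _ in range(3):
    # z': randomly switch each word on (keep) or off (drop).
    z_prime = [random.randint(0, 1) for _ in range(d_prime)]
    # Recover z in the original representation: the sentence with the dropped words removed.
    z = " ".join(w for w, keep in zip(words, z_prime) if keep)
    print(z_prime, "->", f(z))
```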
I admit it was not an easy question. Just tried it. Seems we need to sleep on it more.
– Kasra Manshaei Oct 26 '18 at 14:19
Thanks anyway! Looking forward to finding a potential solution.
– iaaml Oct 26 '18 at 15:10