Decision Tree: Efficient splitting of nodes, minimize number of gini evaluations
I have a dataset-specific problem where I need to use a splitting function other than gini_index. This requires me to rewrite a decision tree from scratch. I have a working model, but it is highly inefficient.
To find a split, I currently iterate through each feature and then through each unique value of that feature, at every node (a total of nodes x features x unique levels Gini evaluations). Because of this, my decision tree has been running for two days on a 300k x 145 dataset.
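For concreteness, here is a minimal sketch of the brute-force search described above (an illustrative reconstruction with made-up names, not the actual code):

```python
import numpy as np

def gini(y):
    """Gini impurity of a 1-D array of class labels."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split_brute_force(X, y):
    """Try every unique value of every feature as a threshold.

    Cost per node: n_features * n_unique_values full Gini evaluations,
    and every evaluation re-scans the whole node -- which is what blows
    up on a 300k x 145 dataset.
    """
    n_samples, n_features = X.shape
    best_feature, best_threshold, best_score = None, None, np.inf
    for j in range(n_features):
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            y_left, y_right = y[mask], y[~mask]
            if len(y_left) == 0 or len(y_right) == 0:
                continue
            score = (len(y_left) * gini(y_left)
                     + len(y_right) * gini(y_right)) / n_samples
            if score < best_score:
                best_feature, best_threshold, best_score = j, t, score
    return best_feature, best_threshold, best_score
```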
How can I cut down on the number of splitting evaluations, or otherwise speed up the program? I came across the Fisher–Yates algorithm in scikit-learn's splitter code, but I don't understand the logic behind it. Any help would be appreciated.
scikit-learn decision-trees
asked Oct 29 '18 at 14:19 by Arslán
1 Answer
In general, to reduce the time needed to run your dataset through a C4.5-style algorithm (or its successor, See5/C5.0), you'll want to reduce the number of nodes in the tree that need to be processed.
This can be done through pruning, good operator selection, and incorporating a heuristic into your decision-tree search.
Alpha-beta pruning, bidirectional search, and the minimax algorithm for operator selection are worth considering when it comes to cutting decision-tree running time.
I'm not going to write an entire book here, but look into the AI search literature and see what has been accomplished so far. A lot of it needs tweaking for this setting (but that's another story); if you come across books claiming bidirectional search is in some way non-optimal, take that with a grain of salt, since such results often reflect the quality of the implementation rather than the algorithm.
A good, practical implementation of impurity-based splitting is available through Ross Quinlan's website. If you work through and understand the C5.0 source code, you will be at research level for decision trees; as far as I can tell, there isn't a clear explanation available online detailing the new algorithm's additions.
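For what it's worth, the main trick most CART-style implementations use to avoid re-scanning the node for every candidate threshold is to sort each feature once and then sweep the thresholds while updating class counts incrementally, so each candidate costs O(n_classes) instead of a full Gini pass. The sketch below is not taken from C5.0 or scikit-learn; it is a generic illustration that assumes integer class labels 0..n_classes-1:

```python
import numpy as np

def best_split_presorted(x, y, n_classes):
    """Best threshold for one feature via an incremental sweep.

    Sort the feature once, then move samples from the right partition to
    the left one at a time, updating class counts as we go.  One feature
    costs O(n log n + n * n_classes) instead of a full Gini re-scan per
    candidate threshold.
    """
    order = np.argsort(x, kind="mergesort")
    x_sorted, y_sorted = x[order], y[order]
    n = len(y_sorted)

    left = np.zeros(n_classes)  # class counts to the left of the split
    right = np.bincount(y_sorted, minlength=n_classes).astype(float)

    best_score, best_threshold = np.inf, None
    for i in range(n - 1):
        c = y_sorted[i]
        left[c] += 1            # move sample i into the left partition
        right[c] -= 1
        if x_sorted[i] == x_sorted[i + 1]:
            continue            # equal values: not a valid threshold
        n_left, n_right = i + 1, n - i - 1
        gini_left = 1.0 - np.sum((left / n_left) ** 2)
        gini_right = 1.0 - np.sum((right / n_right) ** 2)
        score = (n_left * gini_left + n_right * gini_right) / n
        if score < best_score:
            best_score = score
            best_threshold = 0.5 * (x_sorted[i] + x_sorted[i + 1])
    return best_threshold, best_score
```

The two Gini lines can be swapped for any custom criterion that is computable from the running class counts. As far as I can tell, the Fisher–Yates shuffle you saw in scikit-learn's splitter is only used to visit candidate features in random order, not to evaluate the splits themselves.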
answered Oct 29 '18 at 17:28 by Andre Patterson