Decision Tree Algorithm
Decision Tree is a supervised learning technique that can be used for both classification and regression problems, but it is mostly preferred for solving classification problems. It is a tree-structured classifier, where internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome.
In a decision tree, there are two kinds of nodes: decision nodes and leaf nodes. Decision nodes are used to make a decision and have multiple branches, whereas leaf nodes are the outputs of those decisions and do not contain any further branches.
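The two node types described above can be sketched as a small data structure. This is a minimal illustrative sketch, not any library's API; the names `DecisionNode`, `LeafNode`, and `predict` are hypothetical:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class LeafNode:
    # A leaf node holds a final prediction and has no further branches.
    prediction: str

@dataclass
class DecisionNode:
    # A decision node tests one feature and routes a sample down a branch.
    feature: str
    threshold: float
    left: "Node"   # branch taken when the feature value is <= threshold
    right: "Node"  # branch taken when the feature value is > threshold

Node = Union[DecisionNode, LeafNode]

def predict(node: Node, sample: dict) -> str:
    # Walk from the root to a leaf, answering one test per decision node.
    while isinstance(node, DecisionNode):
        node = node.left if sample[node.feature] <= node.threshold else node.right
    return node.prediction

# A tiny hand-built tree over two made-up features, "age" and "income".
tree = DecisionNode(
    "age", 30,
    left=LeafNode("approve"),
    right=DecisionNode("income", 50,
                       left=LeafNode("reject"),
                       right=LeafNode("approve")),
)

print(predict(tree, {"age": 25, "income": 40}))  # approve
print(predict(tree, {"age": 40, "income": 45}))  # reject
```

Each `DecisionNode` asks one question about one feature, and each `LeafNode` simply stores an answer, which is exactly the split of responsibilities described above.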
The decisions or tests are performed on the basis of the features of the given dataset.
It is a graphical representation for getting all possible solutions to a problem or decision based on given conditions.
It is called a decision tree because, like a tree, it starts at the root node and expands into further branches, constructing a tree-like structure.
In order to build a tree, we use the CART algorithm, which stands for Classification and Regression Tree algorithm.
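In practice, you rarely implement CART by hand. For example, scikit-learn's `DecisionTreeClassifier` uses an optimized version of the CART algorithm. A short sketch, assuming scikit-learn is installed and using its bundled iris dataset:

```python
# Sketch: fitting a CART-style decision tree with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small built-in classification dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth limits how many yes/no questions may be chained.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
print("tree depth:", clf.get_depth())
print("number of leaves:", clf.get_n_leaves())
```

`sklearn.tree.plot_tree(clf)` can then draw the fitted tree, showing the decision nodes, branches, and leaf nodes discussed above.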
A decision tree simply asks a question and, based on the answer (Yes/No), splits the tree further into subtrees.
The figure below illustrates the general structure of a decision tree.