My previous post gave a simple implementation of the ID3 algorithm; this one covers the theory behind it.
I originally wrote that implementation for the graduation project of a colleague's classmate (a convoluted chain of favors, I know). Having finished it, I suddenly got interested in data mining, so I have decided to implement C4.5 and C5.0 as well and to study data-mining classification algorithms further.
Honestly, this theory write-up contains nothing of my own; it is quoted from someone else's blog (and I normally dislike reposting).
Credit first: the original author is 神威异度.
We have never spoken, but after an arduous search his blog was where I finally found something genuinely valuable (finding original material online is hard when reposts are everywhere; a small jab at myself for adding one more). Many thanks to him.
The original post: http://www.blog.edu.cn/user2/huangbo929/archives/2006/1533249.shtml
With that said, here is the material, copied over once more.
Principles of Decision Tree Generation
Abstract
This paper details the ID3 classification algorithm. Very simply, ID3 builds a decision tree from a fixed set of examples. The resulting tree is used to classify future samples. Each example has several attributes and belongs to a class (like yes or no). The leaf nodes of the decision tree contain the class name, whereas a non-leaf node is a decision node. The decision node is an attribute test, with each branch (to another decision tree) being a possible value of the attribute. ID3 uses information gain to help it decide which attribute goes into a decision node. The advantage of learning a decision tree is that a program, rather than a knowledge engineer, elicits knowledge from an expert.
Introduction
J. Ross Quinlan originally developed ID3 at the University of Sydney in the mid-1970s; the canonical description appeared in "Induction of Decision Trees," Machine Learning, vol. 1, no. 1 (1986). ID3 is based on the Concept Learning System (CLS) algorithm. The basic CLS algorithm over a set of training instances C is:
Step 1: If all instances in C are positive, then create a YES node and halt.
If all instances in C are negative, create a NO node and halt.
Otherwise, select a feature F with values v1, ..., vn and create a decision node.
Step 2: Partition the training instances in C into subsets C1, C2, ..., Cn according to the values of F.
Step 3: Apply the algorithm recursively to each of the sets Ci.
Note that in CLS the trainer (the expert) decides which feature to select.
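To make the three steps concrete, here is a minimal Java sketch of the CLS recursion. This is illustrative code of my own, not from the original article; names like ClsSketch and selectFeature are made up. ID3, described next, is the same skeleton with selectFeature replaced by a maximum-information-gain search.

```java
import java.util.*;

// Hypothetical sketch of the CLS recursion (Steps 1-3); assumes complete,
// consistent data. If features run out on inconsistent data, this sketch
// simply returns an arbitrary remaining class.
class ClsSketch {
    // Each example is an attribute->value map with a "class" entry ("Yes"/"No").
    static Object build(List<Map<String, String>> c, Set<String> features) {
        Set<String> classes = new HashSet<>();
        for (Map<String, String> e : c) classes.add(e.get("class"));
        if (classes.size() == 1 || features.isEmpty())
            return classes.iterator().next();               // Step 1: pure set -> YES/NO leaf
        String f = selectFeature(c, features);               // choose feature F
        Map<String, List<Map<String, String>>> parts = new LinkedHashMap<>();
        for (Map<String, String> e : c)                      // Step 2: partition C by values of F
            parts.computeIfAbsent(e.get(f), k -> new ArrayList<>()).add(e);
        Set<String> rest = new HashSet<>(features);
        rest.remove(f);                                      // F is consumed along this path
        Map<String, Object> node = new LinkedHashMap<>();    // decision node: "F=v" -> subtree
        for (Map.Entry<String, List<Map<String, String>>> p : parts.entrySet())
            node.put(f + "=" + p.getKey(), build(p.getValue(), rest)); // Step 3: recurse on each Ci
        return node;
    }
    static String selectFeature(List<Map<String, String>> c, Set<String> features) {
        return features.iterator().next(); // CLS: the expert chooses; placeholder here
    }
}
```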
ID3 improves on CLS by adding a feature selection heuristic. ID3 searches through the attributes of the training instances and extracts the attribute that best separates the given examples. If the attribute perfectly classifies the training set, ID3 stops; otherwise it recursively operates on the n partitioned subsets (where n is the number of possible values of the attribute) to find their "best" attribute. The algorithm uses greedy search: it picks the best attribute and never looks back to reconsider earlier choices.
Discussion
ID3 is a nonincremental algorithm, meaning it derives its classes from a fixed set of training instances. An incremental algorithm, by contrast, revises the current concept definition as new samples arrive. The classes created by ID3 are inductive: given a small set of training instances, the specific classes created by ID3 are expected to work for all future instances. For this to hold, future (unseen) instances must follow the same distribution as the training cases. Induced classes cannot be proven to work in every case, since they may classify an infinite number of instances. Note that ID3 (or any inductive algorithm) may misclassify data.
Data Description
The sample data used by ID3 has certain requirements, which are:
Attribute-value description - the same attributes must describe each example and have a fixed number of values.
Predefined classes - each example's class must be assigned in advance; the classes themselves are not learned by ID3 (this is supervised learning).
Discrete classes - classes must be sharply delineated. Continuous classes broken up into vague categories such as a metal being "hard, quite hard, flexible, soft, quite soft" are suspect.
Sufficient examples - since inductive generalization is used (i.e. not provable) there must be enough test cases to distinguish valid patterns from chance occurrences.
Attribute Selection
How does ID3 decide which attribute is best? It uses a statistical property called information gain. Gain measures how well a given attribute separates the training examples into the target classes. The attribute with the highest information gain (the one most useful for classification) is selected. In order to define gain, we first borrow an idea from information theory called entropy. Entropy measures the impurity, or disorder, of a collection of examples.
Given a collection S whose examples fall into c classes,
Entropy(S) = Σ_I −p(I) log2 p(I)
where p(I) is the proportion of S belonging to class I, the sum Σ runs over the c classes, and log2 is the logarithm base 2.
Note that S is not an attribute but the entire sample set.
Example 1
If S is a collection of 14 examples with 9 YES and 5 NO examples then
Entropy(S) = - (9/14) Log2 (9/14) - (5/14) Log2 (5/14) = 0.940
Notice entropy is 0 if all members of S belong to the same class (the data is perfectly classified). For a two-class problem the range of entropy is 0 ("perfectly classified") to 1 ("totally random"); with c classes the maximum is log2(c).
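As a sanity check on Example 1, here is a small self-contained Java sketch (my own illustrative code, not from the article) that computes entropy from per-class counts:

```java
// Illustrative sketch: entropy of a sample set from its per-class counts.
public class EntropyDemo {
    static double entropy(int... classCounts) {
        int total = 0;
        for (int n : classCounts) total += n;
        double h = 0.0;
        for (int n : classCounts) {
            if (n == 0) continue;                // 0 * log2(0) is taken as 0
            double p = (double) n / total;       // p(I): proportion of S in class I
            h -= p * Math.log(p) / Math.log(2);  // log base 2 via change of base
        }
        return h;
    }
    public static void main(String[] args) {
        System.out.println(entropy(9, 5));       // prints ~0.940, matching Example 1
    }
}
```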
Gain(S, A), the information gain of example set S on attribute A, is defined as
Gain(S, A) = Entropy(S) - Σ_v ((|Sv| / |S|) * Entropy(Sv))
Where:
the sum Σ runs over each value v of all possible values of attribute A
Sv = subset of S for which attribute A has value v
|Sv| = number of elements in Sv
|S| = number of elements in S
Example 2
Suppose S is a set of 14 examples in which one of the attributes is wind speed. The values of Wind can be Weak or Strong. The classification of these 14 examples is 9 YES and 5 NO. For attribute Wind, suppose there are 8 occurrences of Wind = Weak and 6 occurrences of Wind = Strong. For Wind = Weak, 6 of the examples are YES and 2 are NO. For Wind = Strong, 3 are YES and 3 are NO. Therefore
Entropy(Sweak) = - (6/8)*log2(6/8) - (2/8)*log2(2/8) = 0.811
Entropy(Sstrong) = - (3/6)*log2(3/6) - (3/6)*log2(3/6) = 1.00
Gain(S,Wind) = Entropy(S) - (8/14)*Entropy(Sweak) - (6/14)*Entropy(Sstrong)
             = 0.940 - (8/14)*0.811 - (6/14)*1.00
             = 0.048
For each attribute, the gain is calculated and the highest gain is used in the decision node.
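Carrying the earlier sketch one step further (and reusing its entropy method), Gain(S, A) needs only the class counts of each partition Sv. The call in the trailing comment reproduces Example 2's 0.048. Again, this is my own hedged illustration, not code from the article:

```java
// Gain(S, A) from per-partition class counts; entropy(...) is the sketch above.
static double gain(int[] sCounts, int[][] partitionCounts) {
    int total = 0;
    for (int n : sCounts) total += n;               // |S|
    double g = entropy(sCounts);                    // Entropy(S)
    for (int[] sv : partitionCounts) {
        int size = 0;
        for (int n : sv) size += n;                 // |Sv|
        g -= (double) size / total * entropy(sv);   // minus (|Sv|/|S|) * Entropy(Sv)
    }
    return g;
}
// gain(new int[]{9, 5}, new int[][]{{6, 2}, {3, 3}}) returns ~0.048:
// {6, 2} is Wind=Weak (6 YES, 2 NO); {3, 3} is Wind=Strong (3 YES, 3 NO).
```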
Example of ID3
Suppose we want ID3 to decide whether the weather is amenable to playing baseball. Over the course of 2 weeks, data is collected to help ID3 build a decision tree (see table 1).
The target classification is "should we play baseball?" which can be yes or no.
The weather attributes are outlook, temperature, humidity, and wind speed. They can have the following values:
outlook = { sunny, overcast, rain }
temperature = { hot, mild, cool }
humidity = { high, normal }
wind = { weak, strong }
Examples of set S are:
Day Outlook Temperature Humidity Wind Play ball
D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Strong Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Weak Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No
Table 1
We need to find which attribute will be the root node in our decision tree. The gain is calculated for all four attributes:
Gain(S, Outlook) = 0.246
Gain(S, Temperature) = 0.029
Gain(S, Humidity) = 0.151
Gain(S, Wind) = 0.048 (calculated in example 2)
The Outlook attribute has the highest gain; therefore it is used as the decision attribute in the root node.
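All four gains can be reproduced with a short, self-contained Java program over Table 1. This is an illustrative sketch of my own (class name and layout are arbitrary), not the implementation from my previous post:

```java
import java.util.*;

public class RootSelection {
    // Table 1; columns: Outlook, Temperature, Humidity, Wind, Play ball (rows D1..D14).
    static final String[][] TABLE = {
        {"Sunny","Hot","High","Weak","No"},          {"Sunny","Hot","High","Strong","No"},
        {"Overcast","Hot","High","Weak","Yes"},      {"Rain","Mild","High","Weak","Yes"},
        {"Rain","Cool","Normal","Weak","Yes"},       {"Rain","Cool","Normal","Strong","No"},
        {"Overcast","Cool","Normal","Strong","Yes"}, {"Sunny","Mild","High","Weak","No"},
        {"Sunny","Cool","Normal","Weak","Yes"},      {"Rain","Mild","Normal","Weak","Yes"},
        {"Sunny","Mild","Normal","Strong","Yes"},    {"Overcast","Mild","High","Strong","Yes"},
        {"Overcast","Hot","Normal","Weak","Yes"},    {"Rain","Mild","High","Strong","No"}
    };
    static final String[] ATTRS = {"Outlook", "Temperature", "Humidity", "Wind"};

    public static void main(String[] args) {
        for (int a = 0; a < ATTRS.length; a++)       // prints 0.246, 0.029, 0.151, 0.048
            System.out.printf("Gain(S, %s) = %.3f%n", ATTRS[a], gain(TABLE, a));
    }

    // Gain(S, A): Entropy(S) minus the weighted entropy of each value's subset Sv.
    static double gain(String[][] rows, int attr) {
        Map<String, List<String[]>> parts = new LinkedHashMap<>();
        for (String[] r : rows)
            parts.computeIfAbsent(r[attr], k -> new ArrayList<>()).add(r);
        double g = entropy(rows);
        for (List<String[]> sv : parts.values())
            g -= (double) sv.size() / rows.length * entropy(sv.toArray(new String[0][]));
        return g;
    }

    // Entropy over the class column (the last column) of the given rows.
    static double entropy(String[][] rows) {
        Map<String, Integer> counts = new HashMap<>();
        for (String[] r : rows) counts.merge(r[r.length - 1], 1, Integer::sum);
        double h = 0.0;
        for (int n : counts.values()) {
            double p = (double) n / rows.length;
            h -= p * Math.log(p) / Math.log(2);
        }
        return h;
    }
}
```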
Since Outlook has three possible values, the root node has three branches (sunny, overcast, rain). The next question is "what attribute should be tested at the Sunny branch node?" Since we've used Outlook at the root, we only decide among the remaining three attributes: Humidity, Temperature, and Wind.
Ssunny = {D1, D2, D8, D9, D11} = 5 examples from table 1 with outlook = sunny
Gain(Ssunny, Humidity) = 0.970
Gain(Ssunny, Temperature) = 0.570
Gain(Ssunny, Wind) = 0.019
Humidity has the highest gain; therefore, it is used as the decision node. This process goes on until all data is classified perfectly or we run out of attributes.
The final decision tree places Outlook at the root, tests Humidity under the Sunny branch and Wind under the Rain branch, and labels the Overcast branch Yes directly. The decision tree can also be expressed in rule format:
IF outlook = sunny AND humidity = high THEN playball = no
IF outlook = sunny AND humidity = normal THEN playball = yes
IF outlook = overcast THEN playball = yes
IF outlook = rain AND wind = strong THEN playball = no
IF outlook = rain AND wind = weak THEN playball = yes
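Since Temperature never appears in the final tree, the whole rule set collapses into one three-way test. As a hypothetical Java method (my own illustration):

```java
// The five rules above as executable code; argument values as in Table 1.
static String playBall(String outlook, String humidity, String wind) {
    if (outlook.equals("Overcast")) return "Yes";     // overcast -> yes
    if (outlook.equals("Sunny"))                      // sunny branch tests humidity
        return humidity.equals("High") ? "No" : "Yes";
    return wind.equals("Strong") ? "No" : "Yes";      // rain branch tests wind
}
```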
ID3 has been incorporated into a number of commercial rule-induction packages. Some specific applications include medical diagnosis, credit risk assessment of loan applications, diagnosis of equipment malfunctions by their cause, classification of soybean diseases, and web search classification.
Conclusion
The discussion and examples given show that ID3 is easy to use. Its primary use is replacing the expert who would normally build a classification tree by hand. As industry has shown, ID3 has been effective.