
Feature Selection to Improve Accuracy and Decrease Training Time

Working on a problem, you are always looking to get the most out of the data that you have available. You want the best accuracy you can get.

Typically, the biggest wins come from better understanding the problem you are solving. This is why I stress spending so much time up front defining your problem, analyzing the data, and preparing datasets for your models.

A key part of data preparation is creating transforms of the dataset, such as rescaled attribute values and attributes decomposed into their constituent parts, all with the intention of exposing more useful structure to the modeling algorithms.

Carefully choose features in your dataset.
Photo by Gabe Photos, some rights reserved.

An important suite of methods to employ when preparing your dataset is automatic feature selection algorithms. In this post you will discover feature selection, the benefits of simple feature selection, and how to make the best use of these algorithms in Weka on your dataset.

Not All Attributes Are Equal

Whether you select and gather sample data yourself or it is provided to you by domain experts, the selection of attributes is critically important: it can mean the difference between successfully and meaningfully modeling the problem and failing to do so.

Misleading

Including redundant attributes can be misleading to modeling algorithms. Instance-based methods such as k-nearest neighbor use small neighborhoods in the attribute space to determine classification and regression predictions. These predictions can be greatly skewed by redundant attributes.

Overfitting

Keeping irrelevant attributes in your dataset can result in overfitting. Decision tree algorithms like C4.5 seek to make optimal splits on attribute values. Attributes that are more correlated with the prediction are split on first. Deeper in the tree, less relevant and irrelevant attributes are used to make prediction decisions that may be beneficial only by chance in the training dataset. This overfitting of the training data can negatively affect the modeling power of the method and cripple the predictive accuracy.

It is important to remove redundant and irrelevant attributes from your dataset before evaluating algorithms. This task should be tackled in the Prepare Data step of the applied machine learning process.

Feature Selection

Feature Selection or attribute selection is a process by which you automatically search for the best subset of attributes in your dataset. The notion of “best” is relative to the problem you are trying to solve, but typically means highest accuracy.

A useful way to think about the problem of selecting attributes is a state-space search. The search space is discrete and consists of all possible combinations of attributes you could choose from the dataset. The objective is to navigate through the search space and locate the best or a good enough combination that improves performance over selecting all attributes.

Three key benefits of performing feature selection on your data are:

  • Reduces Overfitting: Less redundant data means less opportunity to make decisions based on noise.
  • Improves Accuracy: Less misleading data means modeling accuracy improves.
  • Reduces Training Time: Less data means that algorithms train faster.

Attribute Selection in Weka

Weka provides an attribute selection tool. The process is separated into two parts:

  • Attribute Evaluator: Method by which attribute subsets are assessed.
  • Search Method: Method by which the space of possible subsets is searched.

Attribute Evaluator

The Attribute Evaluator is the method by which a subset of attributes is assessed. For example, a subset may be assessed by building a model from it and evaluating the accuracy of that model.

Some examples of attribute evaluation methods are:

  • CfsSubsetEval: Values subsets that correlate highly with the class value and have low correlation with each other.
  • ClassifierSubsetEval: Assesses subsets using a predictive algorithm and a separate dataset that you specify.
  • WrapperSubsetEval: Assesses subsets using a classifier that you specify and n-fold cross-validation.

Search Method

The Search Method is the structured way in which the search space of possible attribute subsets is navigated based on the subset evaluation. Baseline methods include Random Search and Exhaustive Search, although graph search algorithms such as Best First Search are popular.

Some examples of search methods are:

  • Exhaustive: Tests all combinations of attributes.
  • BestFirst: Uses a best-first search strategy to navigate attribute subsets.
  • GreedyStepWise: Uses a forward (additive) or backward (subtractive) step-wise strategy to navigate attribute subsets.
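
These two components are also available programmatically through Weka's Java API. Below is a minimal sketch, assuming a dataset file named data.arff with the class as the last attribute, that pairs the CfsSubsetEval evaluator with a BestFirst search and prints the selected attributes:

    import weka.attributeSelection.AttributeSelection;
    import weka.attributeSelection.BestFirst;
    import weka.attributeSelection.CfsSubsetEval;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class SelectAttributes {
        public static void main(String[] args) throws Exception {
            // Load the dataset; "data.arff" is a placeholder for your own file.
            Instances data = DataSource.read("data.arff");
            data.setClassIndex(data.numAttributes() - 1); // class is the last attribute

            // Pair an Attribute Evaluator with a Search Method.
            AttributeSelection selector = new AttributeSelection();
            selector.setEvaluator(new CfsSubsetEval()); // correlation-based subset evaluation
            selector.setSearch(new BestFirst());        // best-first search over subsets

            // Run the search and print which attributes were selected.
            selector.SelectAttributes(data); // note the capitalized method name in Weka's API
            System.out.println(selector.toResultsString());
        }
    }

Swapping in a different evaluator or search method is a one-line change, which makes it easy to compare combinations.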

How to Use Attribute Selection in Weka

In this section I want to share with you three clever ways of using attribute selection in Weka.

1. Explore Attribute Selection

When you are just starting out with attribute selection, I recommend playing with a few of the methods in the Weka Explorer.

Load your dataset and click the “Select attributes” tab. Try out different Attribute Evaluators and Search Methods on your dataset and review the results in the output window.

Feature Selection Methods in the Weka Explorer

The idea is to get a feeling and build up an intuition for 1) how many and 2) which attributes are selected for your problem. You could use this information going forward into either or both of the next steps.

2. Prepare Data with Attribute Selection

The next step would be to use attribute selection as part of your data preparation step.

There is a filter you can use when preprocessing your dataset that will run an attribute selection scheme and then trim your dataset to only the selected attributes. The filter is called “AttributeSelection” and is found under the supervised attribute filters.

Creating Transforms of a Dataset using Feature Selection methods in Weka

You can then save the dataset for use in experiments when spot checking algorithms.
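
If you prefer to script this preparation step, the same filter is available from Weka's Java API. A minimal sketch, again assuming a file named data.arff with the class as the last attribute, applies the filter and saves the reduced dataset:

    import java.io.File;
    import weka.attributeSelection.BestFirst;
    import weka.attributeSelection.CfsSubsetEval;
    import weka.core.Instances;
    import weka.core.converters.ArffSaver;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.supervised.attribute.AttributeSelection;

    public class PrepareWithAttributeSelection {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("data.arff"); // placeholder file name
            data.setClassIndex(data.numAttributes() - 1);

            // Configure the filter with an evaluator and a search method.
            AttributeSelection filter = new AttributeSelection();
            filter.setEvaluator(new CfsSubsetEval());
            filter.setSearch(new BestFirst());
            filter.setInputFormat(data);

            // Trim the dataset down to only the selected attributes (plus the class).
            Instances reduced = Filter.useFilter(data, filter);

            // Save the transformed dataset for later experiments.
            ArffSaver saver = new ArffSaver();
            saver.setInstances(reduced);
            saver.setFile(new File("data-reduced.arff"));
            saver.writeBatch();
        }
    }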

3. Run Algorithms with Attribute Selection

Finally, there is one more clever way you can use attribute selection: incorporate it into the algorithm directly.

There is a meta algorithm you can run and include in experiments that performs attribute selection before training your algorithm of choice. The algorithm is called “AttributeSelectedClassifier” and is found under the “meta” group of algorithms. You can configure it with your classifier of choice, as well as the Attribute Evaluator and Search Method of your choosing.

Coupling a Classifier and Attribute Selection in a Meta Algorithm in Weka

You can include multiple versions of this meta algorithm configured with different variations and configurations of the attribute selection scheme and see how they compare to each other.
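
The same coupling can be reproduced in code. As a minimal sketch, the following wraps J48 (Weka's C4.5 implementation, chosen here purely for illustration) in an AttributeSelectedClassifier and evaluates the combined scheme with 10-fold cross-validation:

    import java.util.Random;
    import weka.attributeSelection.BestFirst;
    import weka.attributeSelection.CfsSubsetEval;
    import weka.classifiers.Evaluation;
    import weka.classifiers.meta.AttributeSelectedClassifier;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class RunWithAttributeSelection {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("data.arff"); // placeholder file name
            data.setClassIndex(data.numAttributes() - 1);

            // Couple attribute selection with the classifier itself.
            AttributeSelectedClassifier asc = new AttributeSelectedClassifier();
            asc.setClassifier(new J48());          // base algorithm of your choice
            asc.setEvaluator(new CfsSubsetEval()); // Attribute Evaluator
            asc.setSearch(new BestFirst());        // Search Method

            // Evaluate the combined scheme with 10-fold cross-validation.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(asc, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }

Because selection happens inside the classifier, it is re-run on each training fold during cross-validation, which gives a fairer estimate of performance than selecting attributes once on the full dataset.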

Summary

In this post you discovered feature selection as a suite of methods that can increase model accuracy, decrease model training time and reduce overfitting.

You also discovered that feature selection methods are built into Weka, and you learned three clever ways to use them on your dataset: exploring selections in the Explorer, preparing data with an attribute selection filter, and coupling selection with your algorithm in a meta classifier.

Wikipedia has a good entry on Feature Selection.

If you are looking for the next step, I recommend the book Feature Extraction: Foundations and Applications. It is a collection of articles by academics covering a range of issues on and related to feature selection. It's pricey, but well worth it because of the difference these methods can make in solving your problem.

For an updated perspective on feature selection with Weka see the post:

