machine learning - AdaBoost with neural networks


I implemented an AdaBoost project, but I'm not sure if I've understood AdaBoost correctly. Here's what I implemented; please let me know if it is a correct interpretation.

  1. My weak classifiers are 8 different neural networks. Each of these predicts with around 70% accuracy after full training.
  2. I train these networks fully, and collect their predictions on the training set; so I have 8 vectors of predictions on the training set.

Now I use AdaBoost. My interpretation of AdaBoost is that it will find a final classifier as a weighted average of the classifiers I have trained above, and its role is to find these weights. So, for every training example I have 8 predictions, and I'm combining them using the AdaBoost weights. Note that with this interpretation, the weak classifiers are not retrained during the AdaBoost iterations; only the example weights are updated. But the updated weights in effect create a new classifier in each iteration.

Here's the pseudocode:

    all_alphas = []
    all_classifier_indices = []
    initialize training example weights to 1/(num of examples)
    compute the error of the 8 networks on the training set
    for t in 1..T:
        find the classifier with the lowest weighted error
        compute its weight (alpha) according to the AdaBoost confidence formula
        update the weight distribution, according to the weight update formula in AdaBoost
        all_alphas.append(alpha)
        all_classifier_indices.append(selected_classifier)
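As a concrete version of this loop, here is a minimal runnable sketch in Python/NumPy, assuming binary labels in {-1, +1}; `predictions` stands for the 8 x N matrix of training-set prediction vectors collected in step 2, and the name `fit_alphas` is just illustrative:

    import numpy as np

    def fit_alphas(predictions, y, T):
        # predictions: (K, N) array of {-1, +1} outputs, one row per trained network
        # y: (N,) array of true labels in {-1, +1}; T: number of boosting rounds
        K, N = predictions.shape
        w = np.full(N, 1.0 / N)               # initialize example weights to 1/N
        all_alphas, all_classifier_indices = [], []
        for _ in range(T):
            # weighted error of each fixed classifier under the current weights
            errors = np.array([w[predictions[k] != y].sum() for k in range(K)])
            k_best = int(np.argmin(errors))   # classifier with the lowest weighted error
            eps = errors[k_best]
            if eps >= 0.5:                    # nothing better than chance: stop early
                break
            alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))  # confidence formula
            w *= np.exp(-alpha * y * predictions[k_best])  # upweight mistakes, downweight hits
            w /= w.sum()                      # renormalize to a distribution
            all_alphas.append(alpha)
            all_classifier_indices.append(k_best)
        return all_alphas, all_classifier_indices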

After T iterations, there are T alphas and T classifier indices; each of these T classifier indices points to one of the 8 neural net prediction vectors.

Then, on the test set, for every example I predict by summing alpha * classifier over the selected classifiers (and taking the sign of the sum).
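Continuing the sketch above (same assumptions, NumPy already imported), the test-time combination would look something like this, with `test_predictions` as a hypothetical K x M matrix of the networks' {-1, +1} outputs on the test set:

    def predict(test_predictions, all_alphas, all_classifier_indices):
        # final decision: sign of the alpha-weighted sum of the selected predictions
        scores = sum(a * test_predictions[k]
                     for a, k in zip(all_alphas, all_classifier_indices))
        return np.sign(scores)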

I want to use AdaBoost with neural networks, but I think I've misinterpreted the AdaBoost algorithm.

Boosting summary:

1- Train the first weak classifier using the training data.

2- The 1st trained classifier makes mistakes on some samples and correctly classifies others. Increase the weight of the wrongly classified samples and decrease the weight of the correct ones. Retrain a classifier with these weights to get the 2nd classifier.

In your case, you first have to resample with replacement from your data according to these updated weights, create a new training dataset, and then train a classifier on this new data.

3- Repeat the 2nd step T times. At the end of each round, calculate the alpha weight for that round's classifier according to the formula.

4- The final classifier is the weighted sum of the decisions of the T classifiers (a rough sketch follows below).
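Here is a rough sketch of steps 1-4 with the resampling variant from point 2, again in Python/NumPy. This is illustrative, not a recipe: `train_network` is a hypothetical function that fits a fresh network and returns a model whose `predict` method outputs labels in {-1, +1}:

    import numpy as np

    def adaboost_resample(X, y, T, train_network):
        # AdaBoost with resampling: each round retrains a fresh weak learner
        # on a weighted bootstrap sample instead of reusing pretrained networks.
        N = len(y)
        w = np.full(N, 1.0 / N)
        models, alphas = [], []
        for _ in range(T):
            idx = np.random.choice(N, size=N, replace=True, p=w)  # resample w/ replacement
            model = train_network(X[idx], y[idx])   # step 2: retrain on the new dataset
            pred = model.predict(X)                 # evaluate on the full training set
            eps = w[pred != y].sum()                # weighted error
            if eps >= 0.5:                          # weak learner no better than chance
                break
            alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))  # step 3: alpha formula
            w *= np.exp(-alpha * y * pred)          # boost the misclassified samples
            w /= w.sum()
            models.append(model)
            alphas.append(alpha)
        # step 4: the final classifier is sign(sum_t alpha_t * model_t.predict(x))
        return models, alphas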

It is clear from this explanation that you have done it a bit wrongly. Instead of retraining the networks on the new data sets, you trained them all on the original dataset. You are in fact using a kind of random forest type ensemble (except using NNs instead of decision trees).

PS: There is no guarantee that boosting increases accuracy. In fact, all the boosting methods I'm aware of so far have been unsuccessful at improving accuracy with NNs as weak learners (the reason has to do with the way boosting works, and needs a lengthier discussion).

