The Baum-Welch algorithm, also known as the forward-backward algorithm, was invented by Leonard E. Baum and Lloyd R. Welch. It is a special case of the Expectation-Maximization (EM) method. The Baum-Welch algorithm is very effective for training a hidden Markov model (HMM) without using manually annotated corpora.
The Baum-Welch algorithm works by first assigning initial probabilities to all the parameters. Then, until the training converges, it adjusts the probabilities of the HMM's parameters so as to increase the probability the model assigns to the training set.
If no prior information is available, the parameters are assigned random probabilities. If domain knowledge is available, an informed initial guess is made for the parameter values.
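As a concrete illustration, here is one way the initialization step might look for a discrete HMM in Python with NumPy. The state count, symbol count, and the "sticky" transition guess are all hypothetical; the only requirement is that each probability row sums to 1.

```python
import numpy as np

# Hypothetical sizes: N hidden states, M observation symbols.
N, M = 2, 3
rng = np.random.default_rng(42)

# Random initialization (no prior knowledge): normalize each row to sum to 1.
A = rng.random((N, N)); A /= A.sum(axis=1, keepdims=True)   # state transitions
B = rng.random((N, M)); B /= B.sum(axis=1, keepdims=True)   # emissions
pi = rng.random(N); pi /= pi.sum()                          # initial state distribution

# Informed initialization (domain knowledge): e.g. a guess that states
# tend to persist, so self-transitions get most of the mass.
A_informed = np.array([[0.9, 0.1],
                       [0.1, 0.9]])
```

Either starting point is valid input to the training loop; an informed guess mainly affects which local optimum training converges to.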
Once the initial values are assigned to the parameters, the algorithm enters a training loop. In each iteration, new parameter estimates are computed from the state assignments and corresponding probabilities that the current model produces; that is, the parameter values are re-estimated in each iteration. Training stops when the increase in the probability of the training set between iterations falls below some small threshold.
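The loop described above can be sketched as follows for a discrete-observation HMM. This is a minimal illustration under simplifying assumptions, not a production implementation: it handles a single short training sequence, omits the log-space scaling needed for long sequences, and all function names and the toy data are made up.

```python
import numpy as np

def forward(A, B, pi, obs):
    # alpha[t, i] = P(obs[0..t], state_t = i)
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    # beta[t, i] = P(obs[t+1..T-1] | state_t = i)
    T, N = len(obs), A.shape[0]
    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch(obs, n_states, n_symbols, n_iter=50, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # Random initial guesses; each row normalized to sum to 1.
    A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    pi = rng.random(n_states); pi /= pi.sum()
    prev_ll = -np.inf
    for _ in range(n_iter):
        alpha = forward(A, B, pi, obs)
        beta = backward(A, B, obs)
        ll = np.log(alpha[-1].sum())          # log P(obs | current model)
        # E-step: state and transition posteriors under the current model.
        gamma = alpha * beta
        gamma /= gamma.sum(1, keepdims=True)
        xi = (alpha[:-1, :, None] * A[None]
              * (B[:, obs[1:]].T * beta[1:])[:, None, :])
        xi /= xi.sum((1, 2), keepdims=True)
        # M-step: re-estimate parameters from the posteriors.
        pi = gamma[0]
        A = xi.sum(0) / gamma[:-1].sum(0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(0)
        B /= gamma.sum(0)[:, None]
        # Stop once the likelihood gain falls below the threshold.
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return A, B, pi

# Toy usage: a repeating pattern over 3 observation symbols.
obs = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
A, B, pi = baum_welch(obs, n_states=2, n_symbols=3)
```

Each iteration is one EM step: the forward-backward pass computes expected state occupancies and transitions under the current parameters, and the re-estimation step replaces the parameters with their normalized expected counts.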
The forward-backward algorithm is guaranteed only to find a locally optimal set of parameter values starting from the initial ones, so the quality of the result depends on the initialization. It works well when even a small amount of manually tagged corpus is available to inform that initial guess.