Posts Tagged ‘POS Tagging’

Baum-Welch Algorithm

The Baum-Welch algorithm, also known as the forward-backward algorithm, was invented by Leonard E. Baum and Lloyd R. Welch. It is a special case of the Expectation-Maximization (EM) method. The Baum-Welch algorithm is very effective for training a Markov model without using manually annotated corpora.

The Baum-Welch algorithm works by assigning initial probabilities to all the parameters. Then, until the training converges, it adjusts the probabilities of the HMM's parameters so as to increase the probability the model assigns to the training set.

Maximization Process

If no prior information is available, the parameters are assigned random probabilities. If domain knowledge is available, an informed initial guess is made for the parameter values.

Once the initial values are assigned to the parameters, the algorithm enters a training loop. In each iteration, expected counts are computed from the tags and probabilities that the current model assigns, and the parameter values are re-estimated from those counts. Training stops when the increase in the probability of the training set between iterations falls below some small threshold.

The forward-backward algorithm is guaranteed to find a locally optimal set of values given the initial parameter values. It works well when even a small amount of manually tagged corpus is available to seed it.
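
The following is a minimal sketch of Baum-Welch re-estimation on a toy two-state HMM, assuming NumPy is available; the states, vocabulary and starting probabilities are invented purely for illustration and are not taken from any particular tagger.

    # Minimal Baum-Welch sketch on a toy HMM (illustrative values only).
    import numpy as np

    states = ["N", "V"]                       # hypothetical hidden tag states
    vocab = {"the": 0, "rabbit": 1, "runs": 2}

    pi = np.array([0.6, 0.4])                 # initial state probabilities
    A = np.array([[0.7, 0.3],                 # A[i, j] = P(state j | state i)
                  [0.4, 0.6]])
    B = np.array([[0.4, 0.4, 0.2],            # B[i, k] = P(word k | state i)
                  [0.2, 0.2, 0.6]])

    obs = [vocab[w] for w in ["the", "rabbit", "runs"]]   # one training sentence

    def baum_welch_step(pi, A, B, obs):
        """One EM (re-estimation) pass over a single observation sequence."""
        T, N = len(obs), len(pi)

        # Forward pass: alpha[t, i] = P(o_1..o_t, state_t = i)
        alpha = np.zeros((T, N))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

        # Backward pass: beta[t, i] = P(o_{t+1}..o_T | state_t = i)
        beta = np.zeros((T, N))
        beta[T - 1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

        likelihood = alpha[T - 1].sum()

        # Expected counts (E-step)
        gamma = alpha * beta / likelihood                 # state occupancy
        xi = np.zeros((T - 1, N, N))                      # expected transitions
        for t in range(T - 1):
            xi[t] = (alpha[t][:, None] * A *
                     B[:, obs[t + 1]] * beta[t + 1]) / likelihood

        # Re-estimation (M-step)
        new_pi = gamma[0]
        new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        new_B = np.zeros_like(B)
        for k in range(B.shape[1]):
            mask = np.array([o == k for o in obs], dtype=float)
            new_B[:, k] = (gamma * mask[:, None]).sum(axis=0) / gamma.sum(axis=0)
        return new_pi, new_A, new_B, likelihood

    # Iterate until the likelihood gain falls below a small threshold.
    prev = -np.inf
    for _ in range(100):
        pi, A, B, like = baum_welch_step(pi, A, B, obs)
        if like - prev < 1e-6:
            break
        prev = like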

Maximum Entropy

Maximum Entropy Tagging

Maximum entropy tagging aims to find a model with maximum entropy, where maximum entropy means maximum randomness or minimum additional structure. It exploits some of the good properties of transformation-based learning and of Markov model tagging, and it allows flexibility in the cues used to disambiguate words. The outputs of maximum entropy tagging are tags and their probabilities.

The maximum entropy framework finds a single probability model that is consistent with the constraints of the training data and maximally agnostic beyond what the training data indicates. The probability model is taken over the space H * T, where H is the set of environments (histories) in which a word appears and T is the set of possible POS tags. The maximum entropy model specifies a set of features of the environment used for tag prediction. These features are reminiscent of the transformation rules in transformation-based learning.

A typical environment is specified as

hi = {wi, wi+1, wi+2, wi-1, wi-2, ti-1, ti-2}

where h stands for environment, w for word, t for tag, and i for index. The above equation is for the ith word wi, whose preceding two words are wi-1 and wi-2, whose succeeding two words are wi+1 and wi+2, and whose previous two tags are ti-1 and ti-2.
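
Below is a small sketch of how such an environment might be collected for one position during left-to-right tagging; the padding and start symbols are assumptions made for this example.

    # Sketch: building the environment h_i for position i; the <PAD> and
    # <START> symbols are assumed conventions, not part of the original text.
    def environment(words, tags, i):
        def w(j):
            return words[j] if 0 <= j < len(words) else "<PAD>"
        return {
            "w_i": words[i],
            "w_i+1": w(i + 1), "w_i+2": w(i + 2),
            "w_i-1": w(i - 1), "w_i-2": w(i - 2),
            "t_i-1": tags[i - 1] if i >= 1 else "<START>",
            "t_i-2": tags[i - 2] if i >= 2 else "<START>",
        }

    words = ["The", "rabbit", "runs"]
    tags = ["Det", "N"]               # tags already assigned, left to right
    print(environment(words, tags, 2))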

Given the environment, a set of binary features can be defined. The following is the jth feature, which is on or off depending on properties of the environment.

fj(hi, ti) = 1 if suffix(wi) = ing and ti = PresPartVerb
           = 0 otherwise

That is, the feature above will be on (i.e., 1) if the suffix of the word in question is ing and the tag is a present participle (gerund) verb, and will be off (i.e., 0) otherwise.
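
As a sketch, the same feature can be written as a small Python function; the environment keys and the tag name PresPartVerb follow the notation above and are otherwise arbitrary.

    # Sketch of the binary feature f_j defined above.
    def f_j(h, t):
        return 1 if h["w_i"].endswith("ing") and t == "PresPartVerb" else 0

    h = {"w_i": "running", "w_i-1": "is", "t_i-1": "Verb"}   # toy environment
    print(f_j(h, "PresPartVerb"))   # 1: the feature fires
    print(f_j(h, "Noun"))           # 0: the feature is off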

Features are generated from feature templates. For the above feature, the template is

X is a suffix of wi, |X| < 5 AND ti = T

where X and T are variables
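
A sketch of how concrete features could be instantiated from this template by scanning a toy training set follows; the representation of each feature as a (suffix, tag) pair is an assumption made for illustration.

    # Sketch: instantiating (suffix X, tag T) features from the template above.
    def features_from_template(tagged_words):
        feats = set()
        for word, tag in tagged_words:
            for n in range(1, min(5, len(word))):      # suffixes X with |X| < 5
                feats.add((word[-n:], tag))
        return feats

    training = [("running", "PresPartVerb"), ("runs", "Verb"), ("rabbit", "Noun")]
    for suffix, tag in sorted(features_from_template(training)):
        print(f"f(h, t) = 1 if suffix(w_i) = '{suffix}' and t = '{tag}'")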

A set of features and their observed probabilities are extracted from the training set. The Generalized Iterative Scaling (GIS) method is then used to create the maximum entropy model consistent with those observed feature expectations. At the end of this process the model is trained.
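
A compact, hedged sketch of GIS over binary features is given below; the slack feature that pads every event up to a constant total feature count and the fixed number of iterations are simplifications, not part of any particular tagger's implementation.

    # Sketch of Generalized Iterative Scaling (GIS) for p(t | h) over binary features.
    import math

    def gis(events, feature_fns, tagset, iterations=50):
        """events: list of (h, t) training pairs; feature_fns: list of f_j(h, t)."""
        J = len(feature_fns)
        # C bounds the number of active features per (h, t); a slack feature
        # (index J) pads every event up to exactly C active features.
        C = max(sum(f(h, t) for f in feature_fns)
                for h, _ in events for t in tagset) + 1
        lam = [0.0] * (J + 1)

        def active_features(h, t):
            active = [f(h, t) for f in feature_fns]
            active.append(C - sum(active))             # slack feature
            return active

        def predict(h):
            scores = {t: math.exp(sum(l * a for l, a in
                                      zip(lam, active_features(h, t))))
                      for t in tagset}
            z = sum(scores.values())
            return {t: s / z for t, s in scores.items()}

        n = len(events)
        # Empirical feature expectations over the training events.
        emp = [0.0] * (J + 1)
        for h, t in events:
            for j, a in enumerate(active_features(h, t)):
                emp[j] += a / n

        for _ in range(iterations):
            # Feature expectations under the current model.
            model = [0.0] * (J + 1)
            for h, _ in events:
                p = predict(h)
                for t in tagset:
                    for j, a in enumerate(active_features(h, t)):
                        model[j] += p[t] * a / n
            # GIS update: lambda_j += (1/C) * log(empirical / model expectation).
            for j in range(J + 1):
                if emp[j] > 0 and model[j] > 0:
                    lam[j] += math.log(emp[j] / model[j]) / C
        return lam, predict

Calling gis with the feature functions sketched earlier, a list of (environment, tag) training pairs and the tagset would return the learned weights together with a predict function giving P(t | h).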

As in Markov model tagging, the most probable tag sequence according to the probability model is then constructed. Beam search is used for this purpose, keeping the n most likely tag sequences up to the word currently being tagged. Unlike the Markov model approach, there is a great deal of flexibility in what contextual cues can be used.
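
The following is a minimal sketch of that beam search; the toy stand-in model and the way the environment is packaged are assumptions, with the real P(t | h) coming from the trained maximum entropy model.

    # Sketch of beam search over tag sequences; p_tag(h, t) stands in for the
    # trained maximum entropy model's conditional probability P(t | h).
    def beam_search(words, tagset, p_tag, n=3):
        beam = [([], 1.0)]                    # (tag sequence so far, probability)
        for i, _ in enumerate(words):
            candidates = []
            for tags, prob in beam:
                h = {"words": words, "index": i, "tags": tuple(tags)}  # environment
                for t in tagset:
                    candidates.append((tags + [t], prob * p_tag(h, t)))
            # Keep only the n most likely partial tag sequences.
            beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:n]
        return beam[0]                        # best complete tag sequence

    # Toy stand-in for the trained model, just for illustration.
    def toy_model(h, t):
        word = h["words"][h["index"]].lower()
        table = {"the": {"Det": 0.9}, "rabbit": {"N": 0.8}, "runs": {"V": 0.7}}
        return table.get(word, {}).get(t, 0.1)

    print(beam_search(["The", "rabbit", "runs"], ["Det", "N", "V"], toy_model))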

The maximum entropy method is powerful enough to achieve accuracy comparable to the best-performing approaches to POS tagging.


Transformation-Based Learning

What is Transformation-Based Learning?

Transformation-based learning (TBL) is a rule-based algorithm for automatically assigning part-of-speech tags to text. TBL transforms one tagging state into another using transformation rules in order to find the most suitable tag for each word. TBL captures linguistic knowledge in a readable form and extracts that linguistic information automatically from corpora. The outcome of TBL is an ordered sequence of transformations of the form shown below.

Tagi -> Tagj in context C

A typical transformation-based learner has an initial state annotator, a set of transformations and an objective function.

Initial Annotator

The initial annotator is a program that assigns a tag to each and every word in the given text. It may be one that assigns tags randomly, or a Markov model tagger. Usually it assigns every word its most likely tag as indicated in the training corpus. For example, walk would initially be labelled as a verb.
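
Below is a sketch of such a most-likely-tag initial annotator; the tiny training corpus and the default tag for unknown words are invented for illustration.

    # Sketch: initial-state annotator that assigns each word its most frequent
    # tag in the training corpus (toy corpus and default tag assumed).
    from collections import Counter, defaultdict

    training_corpus = [("the", "Det"), ("rabbit", "N"), ("runs", "V"),
                       ("walk", "V"), ("walk", "N"), ("walk", "V")]

    counts = defaultdict(Counter)
    for word, tag in training_corpus:
        counts[word][tag] += 1

    def initial_annotate(words, default="N"):
        return [counts[w].most_common(1)[0][0] if w in counts else default
                for w in words]

    print(initial_annotate(["the", "walk"]))   # ['Det', 'V']: walk's most likely tag is verb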

Transformations

The learner is given a set of allowable transformation types: for example, change a tag from X to Y if the previous word is W, if the previous tag is ti and the following tag is tj, or if the tag two positions before is ti and the following word is W. Consider the following sentence,

The rabbit runs.

A typical TBL tagger (such as the Brill tagger) can easily identify that rabbit is a noun if it is given a rule such as,

Change the tag to noun if the previous tag is an article and the following tag is a verb.
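
As a sketch, this rule can be applied to an initial tagging of the sentence as follows; the tag names and the deliberately wrong initial tag for rabbit are assumptions made to show the correction.

    # Sketch: applying the transformation "change the tag to N if the previous
    # tag is Det (article) and the following tag is V" to an initial tagging.
    def apply_rule(tags):
        new_tags = list(tags)
        for i in range(1, len(tags) - 1):
            if tags[i - 1] == "Det" and tags[i + 1] == "V":
                new_tags[i] = "N"
        return new_tags

    words = ["The", "rabbit", "runs"]
    initial = ["Det", "V", "V"]       # suppose the initial annotator tagged rabbit as V
    print(list(zip(words, apply_rule(initial))))   # rabbit is corrected to N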

How does transformation-based learning work?

Transformation-based learning usually starts with a simple solution to the problem and then runs in cycles. In each cycle, the transformation that gives the greatest benefit is chosen and applied to the problem. The algorithm stops when the selected transformations no longer add value or there are no more transformations to select. This is like painting a wall with a background colour first and then painting each block with a different colour according to its shape. TBL is best suited to classification tasks.

In TBL, accuracy is generally taken as the objective function. So in each training cycle, the tagger finds the transformation that most reduces the errors in the training set. This transformation is then added to the transformation list and applied to the training corpus. At the end of training, the tagger is run by first tagging fresh text with the initial-state annotator and then applying each learned transformation, in order, wherever it can apply.
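
A compact sketch of that greedy training loop is shown below; the single transformation template (change a tag based on the previous tag), the toy gold tags and the exhaustive candidate generation are simplifications for illustration.

    # Sketch of the greedy TBL training loop: in each cycle, pick the candidate
    # transformation that removes the most errors against the gold tags.
    def errors(tags, gold):
        return sum(1 for a, b in zip(tags, gold) if a != b)

    def apply_transformation(tags, rule):
        from_tag, to_tag, prev_tag = rule    # change from_tag to to_tag after prev_tag
        out = list(tags)
        for i in range(1, len(tags)):
            if tags[i] == from_tag and tags[i - 1] == prev_tag:
                out[i] = to_tag
        return out

    def candidate_rules(tags, gold):
        tagset = set(tags) | set(gold)
        return [(f, t, p) for f in tagset for t in tagset for p in tagset if f != t]

    def train_tbl(initial_tags, gold, max_cycles=10):
        tags, learned = list(initial_tags), []
        for _ in range(max_cycles):
            best_rule, best_score = None, errors(tags, gold)
            for rule in candidate_rules(tags, gold):
                score = errors(apply_transformation(tags, rule), gold)
                if score < best_score:
                    best_rule, best_score = rule, score
            if best_rule is None:            # no transformation helps any more
                break
            tags = apply_transformation(tags, best_rule)
            learned.append(best_rule)
        return learned

    gold = ["Det", "N", "V"]
    initial = ["Det", "V", "V"]              # output of the initial annotator
    print(train_tbl(initial, gold))          # [('V', 'N', 'Det')]: change V to N after Det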

Advantages of Transformation Based Learning

  1. A small set of simple rules, sufficient for tagging, is learned.
  2. Because the learned rules are easy to understand, development and debugging are easier.
  3. Interlacing machine-learned and human-generated rules reduces the complexity of tagging.
  4. The transformation list can be compiled into a finite-state machine, resulting in a very fast tagger. A TBL tagger can be as much as ten times faster than the fastest Markov-model tagger.
  5. TBL is less rigid about which cues it uses to disambiguate a particular word, yet it still chooses appropriate cues.

Disadvantages of Transformation Based Learning

  1. TBL does not provide tag probabilities.
  2. Training time is often intolerably long, especially on the large corpora that are common in natural language processing.


For further study

Transformation-Based Learning

Transformation-Based Learning in the Fast Lane

Multidimensional transformation-based learning

Markov Models

Markov Models: Overview

Markov models extract linguistic knowledge automatically from large corpora and perform POS tagging. Markov models are an alternative to laborious and time-consuming manual tagging.

Markov Property

The name Markov model is derived from the term Markov property. The Markov property is an assumption that allows the system to be analysed: given the current state of the system, the future evolution of the system is independent of its past.

Assume that there is a sequence of random variables in which the value of each variable depends on the previous elements in the sequence. In many cases, the value of the present variable alone is sufficient to predict the next random variable; that is, future elements depend only on the present one and not on past elements.

What is Markov Model?

A Markov model is essentially a finite-state machine. Each state has two probability distributions: the probability of emitting each symbol and the probability of moving to each next state. From one state, the Markov model emits a symbol and then moves to another state.

The objective of the Markov model is to find the optimal sequence of tags T = {t1, t2, t3, …, tn} for the word sequence W = {w1, w2, w3, …, wn}, that is, to find the most probable tag sequence for a word sequence.

If we assume the probability of a tag depends only on the one previous tag, the resulting model is called a bigram model. Each state in the bigram model corresponds to a POS tag. The probability of moving from one POS state to another can be represented as P(ti|tj), and the probability of a word being emitted from a particular tag state can be represented as P(wi|tj). Assume that the sentence “The rabbit runs” is to be tagged. The word The is a determiner and can be annotated with the tag Det, rabbit is a noun so its tag can be N, and runs is a verb so its tag can be V. So we get the tagged sentence as

The|Det rabbit|N runs|V

Given this model, P(Det N V | The rabbit runs) is estimated, up to a constant factor, as

P(Det | START) * P(N | Det) * P(V | N) * P(The | Det) * P(rabbit | N) * P(runs | V)

This is how the probabilities required for the Markov model are combined.
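
The same product, computed with entirely made-up probability values just to make the arithmetic concrete:

    # The product above with toy probability values, purely illustrative;
    # real values would be estimated from a tagged corpus.
    transition = {("START", "Det"): 0.7, ("Det", "N"): 0.8, ("N", "V"): 0.5}
    emission = {("Det", "The"): 0.6, ("N", "rabbit"): 0.001, ("V", "runs"): 0.01}

    p = (transition[("START", "Det")] * transition[("Det", "N")] * transition[("N", "V")]
         * emission[("Det", "The")] * emission[("N", "rabbit")] * emission[("V", "runs")])
    print(p)   # P(Det N V, The rabbit runs) under the toy bigram model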

Hidden Markov Models (HMM)

Hidden Markov models (HMMs) are so called because the state transitions are not observable. HMM taggers require only a lexicon and untagged text for training. Hidden Markov models aim to build a language model automatically with little effort. Disambiguation is done by assigning the more probable tag. For example, the word help will be tagged as a noun rather than a verb if it comes after an article, because the probability of a noun is much higher than that of a verb in this context.

In an HMM, we know only the probabilistic function of the state sequence. At the beginning of the tagging process, some initial tag probabilities are assigned to the HMM. Then, in each training cycle, this initial setting is refined using the Baum-Welch re-estimation algorithm.



Rule-Based POS Tagging

Rule-based Parts-Of-Speech Tagging

Rule-based part-of-speech tagging is the oldest approach and uses hand-written rules for tagging. Rule-based taggers depend on a dictionary or lexicon to get the possible tags for each word to be tagged. Hand-written rules are then used to identify the correct tag when a word has more than one possible tag. Disambiguation is done by analysing the linguistic features of the word, its preceding word, its following word and other aspects. For example, if the preceding word is an article, then the word in question must be a noun. This information is coded in the form of rules.
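
A minimal sketch of that idea, with a toy lexicon and a single hand-written rule, might look as follows; the lexicon entries, tag names and fallback behaviour are assumptions for illustration.

    # Sketch: rule-based disambiguation with a toy lexicon and one hand-written
    # rule: if a word can be noun or verb and the preceding word is an article, pick noun.
    lexicon = {"the": ["Det"], "a": ["Det"], "help": ["N", "V"], "runs": ["V", "N"]}

    def rule_based_tag(words):
        tags = []
        for i, w in enumerate(words):
            candidates = lexicon.get(w.lower(), ["N"])   # default to noun if unknown
            if len(candidates) == 1:
                tags.append(candidates[0])
            elif i > 0 and tags[i - 1] == "Det" and "N" in candidates:
                tags.append("N")                         # article + ambiguous word -> noun
            else:
                tags.append(candidates[0])               # otherwise take the first listed tag
        return tags

    print(list(zip(["The", "help"], rule_based_tag(["The", "help"]))))   # help -> N after article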

The rules may be context-pattern rules, or regular expressions compiled into finite-state automata that are intersected with lexically ambiguous sentence representations. TAGGIT, the first large rule-based tagger, used context-pattern rules. TAGGIT used a set of 71 tags and 3,300 disambiguation rules, and these rules disambiguated 77% of the words in the million-word Brown University corpus.
