TRANSFORMATION BASED LEARNING

What is Transformation-Based Learning?

Transformation-based learning (TBL) is a rule-based algorithm for automatically assigning part-of-speech tags to text. TBL transforms one tagging state into another by applying transformation rules in order to find the suitable tag for each word. TBL captures linguistic knowledge in a readable form, extracting it automatically from corpora. The outcome of TBL is an ordered sequence of transformations of the form shown below.

Tagi -> Tagj in context C

A typical transformation-based learner has three components: an initial-state annotator, a set of allowable transformations, and an objective function.
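A transformation of this form can be represented as a small data structure. The sketch below is a hypothetical simplification in which the context C is just the tag of the previous word; the tag names are illustrative, not from any standard tagset.

```python
from collections import namedtuple

# A transformation rewrites one tag as another in a triggering context.
# Here the context is simplified to the tag of the previous word.
Transformation = namedtuple("Transformation", ["from_tag", "to_tag", "prev_tag"])

# "Change VERB to NOUN when the previous tag is DET."
rule = Transformation(from_tag="VERB", to_tag="NOUN", prev_tag="DET")
print(rule)
```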

INITIAL ANNOTATOR

It is a program that assigns a tag to each and every word in the given text. It may be one that assigns tags randomly, or a Markov-model tagger. Usually it assigns every word its most likely tag as indicated in the training corpus. For example, walk would initially be labelled as a verb.
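A most-likely-tag initial annotator can be sketched as follows. The miniature training corpus and the tag names here are invented for illustration; a real annotator would be trained on a large tagged corpus.

```python
from collections import Counter, defaultdict

# Hypothetical miniature training corpus of (word, tag) pairs.
training_corpus = [
    ("the", "DET"), ("rabbit", "NOUN"), ("runs", "VERB"),
    ("they", "PRON"), ("walk", "VERB"), ("we", "PRON"),
    ("walk", "VERB"), ("a", "DET"), ("walk", "NOUN"),
]

def build_initial_annotator(corpus):
    """Return a function that tags each word with its most frequent tag."""
    counts = defaultdict(Counter)
    for word, tag in corpus:
        counts[word][tag] += 1
    most_likely = {w: c.most_common(1)[0][0] for w, c in counts.items()}

    def annotate(words, default="NOUN"):
        # Unknown words fall back to a default tag.
        return [(w, most_likely.get(w, default)) for w in words]

    return annotate

annotate = build_initial_annotator(training_corpus)
print(annotate(["they", "walk"]))  # "walk" gets VERB, its most frequent tag here
```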

TRANSFORMATIONS

The learner is given a set of allowable transformation types. For example, a tag may change from X to Y if the previous word is W; if the previous tag is ti and the following tag is tj; or if the tag two positions before is ti and the following word is W. Consider the following sentence,

The rabbit runs.

A typical TBL tagger (such as the Brill tagger) can easily identify that rabbit is a noun if it is given the rule,

Change the tag to noun if the previous tag is an article and the following tag is a verb.
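Applying such a contextual rule can be sketched as below. The tag names and the assumption that the initial annotator mistagged "rabbit" are illustrative.

```python
def apply_rule(tagged, from_tag, to_tag, prev_tag, next_tag):
    """Change from_tag to to_tag wherever the surrounding tags match the rule."""
    out = list(tagged)
    for i, (word, tag) in enumerate(tagged):
        if (tag == from_tag
                and i > 0 and tagged[i - 1][1] == prev_tag
                and i + 1 < len(tagged) and tagged[i + 1][1] == next_tag):
            out[i] = (word, to_tag)
    return out

# Suppose the initial annotator mistagged "rabbit" as a verb.
sentence = [("The", "DET"), ("rabbit", "VERB"), ("runs", "VERB")]
fixed = apply_rule(sentence, "VERB", "NOUN", prev_tag="DET", next_tag="VERB")
print(fixed)  # [('The', 'DET'), ('rabbit', 'NOUN'), ('runs', 'VERB')]
```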

HOW DOES TRANSFORMATION BASED LEARNING WORK?

Transformation based learning usually starts with some simple solution to the problem. It then runs through cycles. In each cycle, the transformation that gives the most benefit is chosen and applied to the problem. The algorithm stops when the selected transformations no longer add value or there are no more transformations to select. This is like painting a wall with a background colour first, then painting a different colour in each block according to its shape. TBL is best suited to classification tasks.

In TBL, accuracy is generally taken as the objective function. So in each training cycle, the tagger finds the transformation that most reduces the errors in the training set. This transformation is then added to the transformation list and applied to the training corpus. At the end of training, the tagger runs by first tagging fresh text with the initial-state annotator and then applying each learned transformation, in order, wherever it can apply.
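The greedy training cycle described above can be sketched as follows. For brevity this toy version works on bare tag sequences, uses a hypothetical rule format (from_tag, to_tag, prev_tag), and considers only two candidate rules; a real learner would instantiate rules from templates over a large corpus.

```python
def apply_rule(tags, rule):
    """Apply one transformation: change from_tag to to_tag after prev_tag."""
    from_t, to_t, prev_t = rule
    return [to_t if t == from_t and i > 0 and tags[i - 1] == prev_t else t
            for i, t in enumerate(tags)]

def errors(tags, gold):
    """Count positions where the current tags disagree with the gold tags."""
    return sum(a != b for a, b in zip(tags, gold))

def tbl_train(initial, gold, candidate_rules):
    """Each cycle, pick the rule that most reduces training-set errors."""
    current, learned = list(initial), []
    while True:
        best_rule, best_gain = None, 0
        for rule in candidate_rules:
            gain = errors(current, gold) - errors(apply_rule(current, rule), gold)
            if gain > best_gain:
                best_rule, best_gain = rule, gain
        if best_rule is None:  # no rule reduces errors further: stop
            break
        learned.append(best_rule)
        current = apply_rule(current, best_rule)
    return learned, current

# "The rabbit runs": the initial annotator mistags "rabbit" as a verb.
initial = ["DET", "VERB", "VERB"]
gold = ["DET", "NOUN", "VERB"]
rules = [("VERB", "NOUN", "DET"), ("NOUN", "VERB", "DET")]
learned, final = tbl_train(initial, gold, rules)
print(learned, final)  # learns the single rule that fixes the error
```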

ADVANTAGES OF TRANSFORMATION BASED LEARNING

  1. A small set of simple rules, sufficient for tagging, is learned.
  2. Because the learned rules are easy to understand, development and debugging are made easier.
  3. Interlacing machine-learned and human-generated rules reduces the complexity of tagging.
  4. The transformation list can be compiled into a finite-state machine, resulting in a very fast tagger. A TBL tagger can be up to ten times faster than the fastest Markov-model tagger.
  5. TBL is less rigid about which cues it uses to disambiguate a particular word, yet it can still choose appropriate cues.

DISADVANTAGES OF TRANSFORMATION BASED LEARNING

  1. TBL does not provide tag probabilities.
  2. Training time is often intolerably long, especially on the large corpora that are very common in Natural Language Processing.