In this session, you learnt the fundamentals of syntactic parsing and POS tagging.

You learnt that there are three broad levels of syntactic processing – POS tagging, constituency parsing, and dependency parsing.

POS tagging is a crucial task in syntactic processing and is used as a **preprocessing step** in many NLP applications.

You learnt the following approaches to POS tagging:

- Lexicon-based

- Rule-based

- Stochastic models such as HMMs

- Deep-learning, RNN-based models
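The simplest of these, the lexicon-based approach, just assigns each word its most frequent tag in a training corpus. A minimal sketch (the tiny tagged corpus and the `NN` fallback tag are illustrative assumptions, not from any real lexicon):

```python
from collections import Counter, defaultdict

# Toy tagged corpus of (word, tag) pairs; a real lexicon would be
# built from a large corpus such as the Penn Treebank.
tagged_words = [
    ("the", "DT"), ("dog", "NN"), ("barks", "VBZ"),
    ("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ"),
    ("a", "DT"), ("dog", "NN"), ("runs", "VBZ"),
]

# For each word, count how often it appears with each tag.
tag_counts = defaultdict(Counter)
for word, tag in tagged_words:
    tag_counts[word][tag] += 1

def lexicon_tag(sentence, default="NN"):
    """Tag each word with its most frequent tag; unseen words get a default."""
    return [
        (w, tag_counts[w].most_common(1)[0][0]) if w in tag_counts else (w, default)
        for w in sentence
    ]

print(lexicon_tag(["the", "dog", "sleeps"]))
# [('the', 'DT'), ('dog', 'NN'), ('sleeps', 'VBZ')]
```

Note that a pure lexicon tagger ignores context entirely, which is exactly the limitation the stochastic models below address.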

You studied **Markov processes** and **HMMs** for POS tagging. You learnt that a Markov process is used to model sequential phenomena and that the **first-order Markov assumption** states that the probability of an event (or state) depends only on the previous state.
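Under the first-order assumption, the probability of a whole tag sequence factorises into a product of one-step transition probabilities. A small illustration (the `<s>` start state and all probability values are made up for this sketch):

```python
# Transition probabilities P(next_tag | current_tag) for a toy tagset;
# the values are illustrative, not estimated from data.
P_trans = {
    ("<s>", "DT"): 0.6, ("<s>", "NN"): 0.2,
    ("DT", "NN"): 0.8, ("NN", "VBZ"): 0.5,
}

def sequence_probability(tags):
    """P(t1..tn) = product of P(t_i | t_{i-1}) under the first-order assumption."""
    prob, prev = 1.0, "<s>"  # "<s>" marks the start of the sentence
    for t in tags:
        prob *= P_trans.get((prev, t), 0.0)  # unseen transitions get zero
        prev = t
    return prob

print(sequence_probability(["DT", "NN", "VBZ"]))  # 0.6 * 0.8 * 0.5 ≈ 0.24
```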

The **Hidden Markov Model** is used to model phenomena where the **states are hidden** and they **emit observations**. The **transition and the emission probabilities** specify the probabilities of transition between states and emission of observations from states, respectively. In POS tagging, the states are the POS tags while the words are the observations.
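Putting the two parameter tables together, the joint probability of a tag sequence and a word sequence is the product of a transition term and an emission term at each position. A minimal sketch, with an assumed `<s>` start state and illustrative probabilities:

```python
# Toy HMM for POS tagging: hidden states are tags, observations are words.
# All probability values are illustrative, not estimated from a real corpus.
transition = {("<s>", "DT"): 0.7, ("DT", "NN"): 0.9, ("NN", "VBZ"): 0.6}
emission = {("DT", "the"): 0.5, ("NN", "dog"): 0.1, ("VBZ", "barks"): 0.05}

def joint_probability(tags, words):
    """P(tags, words) = product over i of P(t_i | t_{i-1}) * P(w_i | t_i)."""
    prob, prev = 1.0, "<s>"
    for t, w in zip(tags, words):
        prob *= transition.get((prev, t), 0.0) * emission.get((t, w), 0.0)
        prev = t
    return prob

p = joint_probability(["DT", "NN", "VBZ"], ["the", "dog", "barks"])
# (0.7 * 0.5) * (0.9 * 0.1) * (0.6 * 0.05) ≈ 0.000945
```

Tagging then amounts to finding the tag sequence that maximises this joint probability for the observed words.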

You also learnt that after **learning the HMM model parameters**, i.e. computing the transition and the emission probabilities, the **Viterbi algorithm** is used to assign POS tags efficiently. Scoring every possible tag sequence is computationally expensive, since the number of sequences grows exponentially with sentence length; Viterbi uses dynamic programming to find the most probable sequence, making it a **computationally viable** alternative to that brute-force search.
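A compact sketch of the Viterbi algorithm for this setting (the toy tagset, start probabilities, and parameter values are assumptions for illustration):

```python
def viterbi(words, tags, start_p, trans_p, emit_p):
    """Most probable tag sequence for `words`, found by dynamic programming."""
    # V[i][t] = (best prob of tagging words[:i+1] ending in tag t, backpointer)
    V = [{t: (start_p.get(t, 0.0) * emit_p.get((t, words[0]), 0.0), None)
          for t in tags}]
    for i in range(1, len(words)):
        col = {}
        for t in tags:
            # Best previous tag to transition from, and the resulting probability.
            prev_t, p = max(((pt, V[i - 1][pt][0] * trans_p.get((pt, t), 0.0))
                             for pt in tags), key=lambda x: x[1])
            col[t] = (p * emit_p.get((t, words[i]), 0.0), prev_t)
        V.append(col)
    # Backtrack from the highest-probability final state.
    best = max(V[-1], key=lambda t: V[-1][t][0])
    path = [best]
    for i in range(len(words) - 1, 0, -1):
        path.append(V[i][path[-1]][1])
    return list(reversed(path))

# Toy model: unseen transitions/emissions default to zero probability.
tags = ["DT", "NN", "VBZ"]
start_p = {"DT": 0.8, "NN": 0.15, "VBZ": 0.05}
trans_p = {("DT", "NN"): 0.9, ("NN", "VBZ"): 0.6}
emit_p = {("DT", "the"): 0.5, ("NN", "dog"): 0.1, ("VBZ", "barks"): 0.05}

print(viterbi(["the", "dog", "barks"], tags, start_p, trans_p, emit_p))
# ['DT', 'NN', 'VBZ']
```

The key saving is that each column only keeps the best path into each tag, so the cost is linear in sentence length rather than exponential.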

You also learnt to build your own HMM-based POS tagger and implement the Viterbi algorithm using the Penn Treebank training corpus.
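The "learning" step for an HMM tagger is just counting and normalising over the training corpus. A sketch of the maximum-likelihood estimates, using a tiny hand-tagged corpus as a stand-in (with NLTK you would read `nltk.corpus.treebank.tagged_sents()` instead):

```python
from collections import Counter

# Tiny hand-tagged corpus standing in for the Penn Treebank training data.
tagged_sents = [
    [("The", "DT"), ("dog", "NN"), ("barks", "VBZ")],
    [("The", "DT"), ("dog", "NN"), ("sleeps", "VBZ")],
]

trans_counts, emit_counts = Counter(), Counter()
tag_totals, prev_totals = Counter(), Counter()
for sent in tagged_sents:
    prev = "<s>"  # start-of-sentence pseudo-state
    for word, tag in sent:
        trans_counts[(prev, tag)] += 1
        emit_counts[(tag, word.lower())] += 1
        tag_totals[tag] += 1
        prev_totals[prev] += 1
        prev = tag

# Maximum-likelihood estimates (no smoothing, so unseen pairs get zero).
def p_trans(prev, tag):
    return trans_counts[(prev, tag)] / prev_totals[prev]

def p_emit(tag, word):
    return emit_counts[(tag, word.lower())] / tag_totals[tag]

print(p_trans("DT", "NN"))    # every DT is followed by NN -> 1.0
print(p_emit("VBZ", "barks")) # 1 of the 2 VBZ tokens -> 0.5
```

In practice you would add smoothing so that words and transitions unseen in training do not get zero probability.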

In the next session on **constituency parsing**, you’ll move a step higher and use POS tags to parse sentences.