HMMpack provides flexible, easy-to-use tools for building Hidden Markov Models and computing their principal quantities. A hidden Markov model (HMM) is a mathematical framework for formulating, studying, and extracting informative structure from meaningful sequences, such as natural-language sentences, speech signals, and especially DNA and protein sequences.
As in ordinary Markov chains, a hidden Markov model rests on a single but strong assumption: the probability of each outcome depends only on a few preceding outcomes (in the first-order case, only on the immediately preceding one). In an HMM, as one might guess from the name, the chain of states is hidden; instead, each state of the chain probabilistically emits an observable symbol.
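In symbols, writing π_i for the hidden state and x_i for the emitted symbol at position i (standard HMM notation; a and e are the transition and emission matrices defined further below), the first-order assumption reads

P(π_i | π_1, ..., π_{i-1}) = P(π_i | π_{i-1}) = a_{π_{i-1} π_i}
P(x_i | π_i) = e_{π_i}(x_i)

so that the joint probability of a path and its emissions factors as P(x, π) = Π_i a_{π_{i-1} π_i} e_{π_i}(x_i).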
A well-known example is the "Occasionally Dishonest Casino", in which two dice, one fair and one loaded, are rolled; we never see which die is rolled, only the number that comes up. The die is chosen at random, but the choice depends on which die was rolled previously. The two die types form the state set, and the numbers one through six form the emission alphabet.
By a path we mean an assignment of states to the successive positions of the chain. The most probable path is the assignment of states that has the highest probability given an observed sequence of emissions. The problem of finding the most probable path is solved by the Viterbi algorithm.
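For reference, the core of the Viterbi algorithm is the standard dynamic-programming recursion, where v_k(i) denotes the probability of the most probable path ending in state k at position i:

v_k(i) = e_k(x_i) * max_j [ v_j(i-1) * a_{jk} ]

The path itself is recovered by tracing back the maximizing j at each step; in log-space the products become sums.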
Two further algorithms, the forward algorithm and the backward algorithm, compute the total probability of the emission sequence summed over all possible chains of states. The intermediate tables built by these two algorithms also allow the posterior probability to be computed. The posterior probability is the probability that a given hidden state occupies a given position of the chain, given the observed sequence of emissions. It is common to use posterior probabilities to assign states along the chain as a prediction of the hidden states. The two most common ways to make such an assignment are posterior decoding by maximum probability and by weighted sum.
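In the standard formulation, with f_k(i) = P(x_1..x_i, π_i = k) and b_k(i) = P(x_{i+1}..x_L | π_i = k) (the two dynamic tables named f and b in the class listed below), the recursions and the posterior are

f_k(i) = e_k(x_i) * Σ_j f_j(i-1) * a_{jk}
b_k(i) = Σ_j a_{kj} * e_j(x_{i+1}) * b_j(i+1)
P(π_i = k | x) = f_k(i) * b_k(i) / P(x)

where the total probability P(x) = Σ_k f_k(L) when no explicit end state is modeled (the termination terms vary with the model design).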
To define a hidden Markov model, one must prepare two matrices as the parameters of the model. The first is the transition matrix, containing the probabilities of moving from each state to every other state. The remaining parameters are the probabilities of emitting each symbol from each state, which form the emission matrix. Note that a well-designed HMM must include a start state, usually silent because it emits nothing. Because machines cannot calculate accurately with very small numbers, it is most common to work with the logarithms of all values rather than the values themselves, to avoid numerical underflow under repeated multiplication and numerical overflow under division by very small probabilities.
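In log-space, products of probabilities become sums of logarithms, so the joint probability of a path and its emissions, for example, is accumulated as

log P(x, π) = Σ_i [ log a_{π_{i-1} π_i} + log e_{π_i}(x_i) ]

which stays in a well-representable range even for long chains. Sums of probabilities, as in the forward algorithm, need extra care in log-space; see approxlogsum below.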
To install the package, download the source code, unpack it, and then compile the source.
$ tar xvzf HMMpack.tar.gz
$ cd HMMpack/src
$ g++ HMMpack.cpp bioinf.cpp MTARand.cpp SampleCase.cpp -o hm
In the last command, SampleCase.cpp should be replaced by one of the following sample cases,
The format of the parameter entry is as follows,
<use log-space 0|1 >
<size of state set, including start state and other silent states>
<size of alphabet for emission symbols>
<elements of transition matrix, space separates columns and new line separates rows>
<elements of emission matrix, space separates columns and new line separates rows>
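As an illustration, a parameter entry for the Occasionally Dishonest Casino, with a silent start state, a fair state, and a loaded state, might look like the following. The numbers here are only a plausible sketch loosely following the commonly cited casino parameters; the actual sample file shipped with the package may differ.

0
3
6
0 0.5 0.5
0 0.95 0.05
0 0.1 0.9
0 0 0 0 0 0
0.16667 0.16667 0.16667 0.16667 0.16667 0.16667
0.1 0.1 0.1 0.1 0.1 0.5

The leading 0 selects raw probabilities rather than log-space; the first row and column of each matrix belong to the silent start state, which emits nothing.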
The program provides six functions as follows,
A typical application of these functions is as follows: the program takes the HMM parameters and a list of observed emissions, then produces two predictions. The first is the result of the Viterbi algorithm, and the second is the prediction obtained by posterior decoding by maximum probability, which implicitly invokes the forward and backward algorithms.
Each output consists of three lines. The first line shows the sequence of observed emissions. The second line shows the predicted chain of hidden states. The third line gives a simple measure of the significance of the prediction at each position, derived from the posterior probability. Lower figures indicate less confidence in the probabilistic inference at the corresponding position; a figure of 5 or greater indicates a satisfactory prediction.
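For instance, a fragment of such an output might look like the following. This is a purely hypothetical illustration of the layout, not actual program output:

315662466123456662664531
FFFFFLLLLLLFFFFLLLLLFFFF
688999998789999989998788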
A sample input file containing the "Occasionally Dishonest Casino" model, as explained in the book, is provided with the program. You can test the program with the following command,
$ ./hm < odc_sample.txt
class HMM {
private:
    int n;          // number of states
    int m;          // size of the symbol set (cardinality of the alphabet)
    double **a;     // transition matrix
    double **e;     // emission matrix
    bool logSp;     // true if calculations are done in log-space
    int L;          // length of the observed sequence
    double **f;     // dynamic table of the forward algorithm
    double **b;     // dynamic table of the backward algorithm
    double P;       // total probability
public:
    friend istream & operator>> ( istream &, HMM & );
    HMM() { L = 0; }
    HMM(int N, int M, double **A, double **E, bool logSpace = false);
    ~HMM();
    bool isLogSpace() { return logSp; }
    void setLogSpace(bool logSpace = true);
    double jointProb(int L, int *x, int *path);
    double viterbi(int L, int *x, int *&mostProbablePath);
    double forward_backward(int L, int *x);
    double posterior(int i, int k);
    void postDecodingMax(int *&predictedPath);
    void postDecodingWeightedSum(double *g, double *&p);
private:
    void freefb();
};
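A new sample case (replacing SampleCase.cpp in the compile command above) might be structured like the following sketch. The observation values and the way the class declaration becomes visible are assumptions for illustration, not the package's exact conventions:

#include <iostream>
using namespace std;

// Sketch of a sample case; assumes the HMM class declaration above is
// visible, e.g. via a header shipped with the package.
int main() {
    HMM hmm;
    cin >> hmm;                       // read the parameter entry described above

    const int L = 6;                  // illustrative observation length
    int x[L] = { 1, 6, 6, 2, 6, 6 };  // observed die rolls, encoded as integers

    int *vPath = NULL;
    hmm.viterbi(L, x, vPath);         // most probable path (Viterbi)

    hmm.forward_backward(L, x);       // build the f and b tables
    int *pPath = NULL;
    hmm.postDecodingMax(pPath);       // posterior decoding by max probability
    return 0;
}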
operator >> (istream &, HMM &)
Read HMM’s parameters from an istream into the HMM object.
double jointProb(int L, int *x, int *path)
Calculate the joint probability P(x, π) of the observation sequence x and the path π.
double viterbi(int L, int *x, int *&mostProbablePath)
Compute the most probable path for a given sequence of observations.
double forward_backward(int L, int *x)
Compute forward and backward dynamic tables.
double posterior(int i, int k)
Calculate P(π_i = k | x). Note that this function must be invoked after forward_backward to get the correct result.
void postDecodingMax(int *&predictedPath)
Choose, for each position on the chain, the state with the maximum posterior probability. Note that this function must be invoked after forward_backward to get the correct result.
void postDecodingWeightedSum(double *g, double *&p)
Calculate, for every position on the chain, the weighted sum of the posterior probabilities with weight vector g. Note that this function must be invoked after forward_backward to get the correct result.
void freefb()
Free the memory allocated by dynamic tables.
double approxlogsum(double p, double q)
Calculate an approximation of log( exp(p) + exp(q) ) using fewer costly operations.
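An approximation like this would typically build on the well-known log-sum-exp trick, sketched below; this is not necessarily the package's exact method:

#include <cmath>

// Stable log(exp(p) + exp(q)): factor out the larger argument so the
// remaining exponential never overflows; exp(q - p) <= 1 when p >= q.
double logsum(double p, double q) {
    if (p < q) { double t = p; p = q; q = t; }   // ensure p >= q
    return p + log1p(exp(q - p));                // p + log(1 + exp(q - p))
}

A cheaper approximation can replace log1p(exp(q - p)) with a lookup table, or return p directly when q - p is so negative that the correction is negligible.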
void HMMDemo ( ostream & os, int L, const int *x, char (*Xdecode) (int), const int *path, char (*Pdecode) (int), int w = 64, const double *Weight = NULL )
Print the predicted path alongside the given observed sequence, annotated by the optional weight vector. The weight vector is meant to reflect the strength of the prediction at each position along the chain. Xdecode and Pdecode are two functions that translate a symbol number and a state number, respectively, into suitable legible characters. w is the number of columns in the output window.
char Xdecode ( int f )
A sample implementation that translates the numbers 1 to 6 into the characters '1'-'6' for the "Occasionally Dishonest Casino" problem.
char Pdecode ( int p )
A sample implementation that translates the state numbers 1 and 2 into the characters 'F' (Fair) and 'L' (Loaded), respectively, for the "Occasionally Dishonest Casino" problem.
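For the casino example, such decoders could be as simple as the following sketch, assuming the 1-based numbering of symbols and states described above:

// Translate symbol number 1..6 to character '1'..'6'.
char Xdecode(int f) {
    return (char)('0' + f);
}

// Translate state number 1 or 2 to 'F' (Fair) or 'L' (Loaded).
char Pdecode(int p) {
    return (p == 1) ? 'F' : 'L';
}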
Ali Katanforoush, <a_katanforosh@sbu.ac.ir>, Department of Computer Science, Shahid Beheshti University, G.C., Tehran.
Time-stamp: "2010-09-18 20:17:53 katanforoush"