= Tutorial About Probabilistic Classification Models =

Benjamin Adrian, Gunnar Grimnes, Jörn Hees, Matthias Sperber

== Abstract ==

== Introduction ==

Classification in general is the problem of deciding, for a given input, to which class it belongs. Usually classification can be subdivided into a learning phase (aka training phase) and a classification phase (aka test phase).

(TODO: offline, online, reinforcement, ... learning)

=== Basics ===

There are several basic concepts to understand before diving into probabilistic learning models.

==== Example / Instance ====

Examples, also called instances, are the basic entities in this field. They occur as training examples, as validation or test examples, and finally as real data.

{{{
E.g., in a document classification scenario, examples are documents. Already classified documents are used for training or evaluation purposes.
}}}

==== Feature ====

A feature is a descriptive property of an example. Features are processable by machines.

{{{
E.g., in a document classification scenario, features might be the words of a document. In consequence, a single feature might describe multiple examples (here documents).
}}}

==== Feature Extraction ====

Feature extraction is the task of extracting features from examples.

{{{
E.g., in our document classification scenario, a tokenizer that extracts words from text might be used for feature extraction.
}}}

In more sophisticated scenarios, feature extraction can be hierarchically nested by extracting new features from existing feature lists.

{{{
E.g., in our document classification scenario, a word n-gram algorithm extracts n-gram features from the extracted word sequences.
}}}

==== Feature Selection ====

Every feature of every example has to be processed by model trainers or executors. There are several reasons for selecting just a subset of the existing features. First, not all features are useful for separating different classes; that is, there is no statistically significant dependency between class and feature occurrence.

{{{
E.g., in our document classification scenario, stop words or highly frequent words are not useful for separating, e.g., spam mails from ham mails.
}}}

Second, a small set of features might already be enough for classifying examples successfully. Adding more features just decreases performance.

=== Main Steps ===

 1. Convert your problem into a classification problem.
 1. Get a pre-classified data set (the more data, or even data sets, the better). Divide it into test, training and development sets.
 1. Think about your features. This is the most important step!
 1. Process the data, extract these features, select significant ones and store them.
 1. Train your model with your training data.
 1. Classify your test data.
 1. Evaluate the results.

=== Relational Classification ===

Relational data consists of entities, which are described by features, and statistical dependencies between these entities.

==== Nearest Neighbor (1-NN or NN) ====

Nearest Neighbor classifiers are the simplest kind of classifiers. In the training phase they simply record the class for each sample. Later, in the classification phase, they calculate the distances of the query to all samples in their records and return the class of the sample which is closest to the query.
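To make this concrete, here is a minimal sketch of a 1-NN classifier in Python. The class name `NearestNeighborClassifier`, the use of numeric feature vectors and Euclidean distance are assumptions made for illustration; they are not prescribed by this tutorial.

{{{
#!python
# Minimal 1-NN sketch (illustration only): samples are assumed to be
# numeric feature vectors, and Euclidean distance is assumed as the metric.
import math

class NearestNeighborClassifier:
    def __init__(self):
        # Training merely records (feature vector, class) pairs.
        self.samples = []

    def train(self, features, label):
        self.samples.append((features, label))

    def classify(self, query):
        # Return the class of the recorded sample closest to the query.
        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        _, label = min(self.samples, key=lambda s: distance(s[0], query))
        return label

# Usage: two hypothetical training examples, one query.
nn = NearestNeighborClassifier()
nn.train([1.0, 1.0], "ham")
nn.train([5.0, 5.0], "spam")
print(nn.classify([1.5, 0.5]))  # -> "ham"
}}}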
===== $k$NN =====

The $k$-Nearest Neighbor classifier is a generalization of the simple NN, which does not immediately return the single "best match" sample's class, but inspects the nearest $k$ samples to the query and returns a class depending on a merging function, such as:
 * the most often observed class
 * classes weighted by inverse distances

==== Naive Bayes ====

Naive Bayes classifiers are probabilistic classifiers that assume the features to be conditionally independent given the class. An example is assigned the class that maximizes the posterior probability:

{{{
#!latex
\begin{equation}
classify(f_1,\dots,f_n) = \operatorname{argmax}_c \ p(C=c) \prod_{i=1}^n p(F_i=f_i \vert C=c).
\end{equation}
}}}

==== Maximum Entropy ====

==== Multi Layer Perceptrons ====

==== Support Vector Machines ====

=== Sequential Classification ===

==== Hidden Markov Model ====

==== Conditional Random Field ====

A conditional random field is a conditional distribution
{{{
#!latex
$P(A|B)$
}}}
with an associated graphical structure.

== Appendix ==

=== Mathematical Foundations ===

==== Bayes Rule ====

{{{
#!latex
$P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}$
}}}
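As a quick sanity check, here is a small worked application of Bayes' rule; the concrete numbers are invented purely for illustration.

{{{
#!latex
$P(A) = 0.2,\ P(B|A) = 0.9,\ P(B) = 0.3 \ \Rightarrow\ P(A|B) = \frac{0.9 \cdot 0.2}{0.3} = 0.6$
}}}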