The support-vector network is a learning machine for two-class classification problems. The fundamental idea is to map input vectors non-linearly into a very high-dimensional feature space and to construct a linear decision surface in that space. Special properties of this decision surface ensure the high generalization ability of the learning
machine. The support-vector network was originally developed for the restricted case in which the
training data can be separated without error; here we extend the idea to non-separable training data.
We demonstrate the high generalization ability of support-vector networks that use polynomial input transformations.
We also compare the performance of the support-vector network with that of various classical learning
algorithms that took part in a benchmark study of Optical Character Recognition.
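To illustrate the core idea (a non-linear map into a feature space in which a linear boundary is learned), the sketch below trains a kernel perceptron with a degree-2 polynomial kernel on the XOR problem, which no linear boundary in the input plane can solve. This is a minimal illustrative sketch, not the paper's method: a support-vector network finds the maximal-margin separating hyperplane via a constrained optimization, whereas the perceptron update here merely finds *some* separating boundary; the data and function names are assumptions made for the example.

```python
# Kernel perceptron: a minimal illustration of learning a linear
# boundary in an implicit high-dimensional feature space.
# NOTE: this is NOT the support-vector training algorithm (which finds
# the maximal-margin hyperplane); it only finds *a* separating boundary.

def poly_kernel(x, z, degree=2):
    """Degree-d polynomial kernel (x.z + 1)^d: an implicit non-linear
    map into a feature space of all monomials up to degree d."""
    return (sum(a * b for a, b in zip(x, z)) + 1) ** degree

def train_kernel_perceptron(X, y, kernel=poly_kernel, epochs=20):
    """Return per-example mistake counts alpha; the decision function is
    f(x) = sum_i alpha[i] * y[i] * kernel(X[i], x)."""
    alpha = [0] * len(X)
    for _ in range(epochs):
        mistakes = 0
        for i, x in enumerate(X):
            f = sum(a * yi * kernel(xi, x)
                    for a, yi, xi in zip(alpha, y, X))
            if y[i] * f <= 0:      # misclassified (or on the boundary)
                alpha[i] += 1
                mistakes += 1
        if mistakes == 0:          # converged: all points separated
            break
    return alpha

def predict(alpha, X, y, x, kernel=poly_kernel):
    f = sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, X))
    return 1 if f > 0 else -1

# XOR in {-1,+1} coordinates: not linearly separable in the input plane,
# but separable by the x1*x2 monomial the polynomial kernel supplies.
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
y = [-1, 1, 1, -1]
alpha = train_kernel_perceptron(X, y)
print([predict(alpha, X, y, x) for x in X])  # → [-1, 1, 1, -1]
```

Replacing the mistake-driven update with maximization of the margin between the two classes, and adding slack terms for the non-separable case, yields the support-vector network described in the paper.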