In this paper we deal with the problem of efficient learning of feedforward neural networks. First, we consider the case where the objective is to maximize the ratio r of correctly classified points to all points in a training set. We show that, for an arbitrary multilayered threshold network with varying input dimension and a constant number n_1 >= 2 of neurons in the first hidden layer, it is NP-hard to approximate r within a relative error of at most \varepsilon = 1/(68 n_1 2^{n_1} + 136 n_1^3 + 136 n_1^2 + 170 n_1). If n_1 >= 3, the result remains true with \varepsilon = c(n_1-1)/(5 n_1^3 + 2 n_1^3 2^{n_1} + 4 n_1^5 + 4 n_1^4) for some positive constant c, even when restricted to situations where a solution without any misclassifications exists. Restricted to architectures with only one hidden layer and two hidden neurons, approximating r within a relative error of at most 1/c is NP-hard even if either (a) c = 2244, the threshold activation function in the hidden layer is replaced by the classical sigmoid function, and \varepsilon-separation of the output classification is assumed, or (b) c = 2380 and the threshold activation function is replaced by the semilinear activation function commonly used in the neural network literature. For a single-hidden-layer threshold network with varying input dimension and k hidden nodes, approximating r within a relative error of at most c/k^5, for some positive constant c, is NP-hard even when restricted to situations where the number of examples is at most k^4. Finally, we consider the case where the objective is to minimize the failure ratio f in the presence of misclassification errors. We show that it is NP-hard to approximate f within any constant c > 1 for a multilayered threshold network if the input biases are fixed to zero.
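To make the quantities concrete, the following is a minimal illustrative sketch (not the paper's construction): it computes the success ratio r of a small single-hidden-layer threshold network on a toy training set and evaluates the first inapproximability threshold \varepsilon for a given n_1. The network weights, the data, and the helper names (threshold_net, success_ratio, eps_bound) are arbitrary example choices introduced here for illustration.

```python
# Illustrative sketch only: example weights and data, not the paper's reduction.
import numpy as np

def threshold_net(x, W1, b1, w2, b2):
    """One hidden layer of threshold (Heaviside) units with a threshold output unit."""
    hidden = (W1 @ x + b1 >= 0).astype(float)
    return float(w2 @ hidden + b2 >= 0)

def success_ratio(X, y, W1, b1, w2, b2):
    """Fraction r of training points classified correctly."""
    preds = [threshold_net(x, W1, b1, w2, b2) for x in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def eps_bound(n1):
    """Relative-error threshold quoted in the abstract for n_1 >= 2 first-layer neurons."""
    return 1.0 / (68 * n1 * 2**n1 + 136 * n1**3 + 136 * n1**2 + 170 * n1)

# Example: n_1 = 2 hidden threshold units realizing XOR-like data in dimension 2.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = [0.0, 1.0, 1.0, 0.0]
W1 = np.array([[1.0, 1.0], [-1.0, -1.0]])   # example first-layer weights
b1 = np.array([-0.5, 1.5])
w2 = np.array([1.0, 1.0])
b2 = -1.5

print("r =", success_ratio(X, y, W1, b1, w2, b2))   # 1.0: all four points correct
print("eps(n_1 = 2) =", eps_bound(2))               # 1/2516, roughly 4e-4
```

For n_1 = 2 the threshold evaluates to 1/2516; the hardness results say that even achieving a relative error this small when approximating the best possible r is NP-hard.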