People are able to reach so-called somatosensory goals, specified by proprioceptive (joint angles) and tactile information, without relying on vision. Self-touch is an important developmental process that allows the autonomous construction of a complex relationship between these two modalities as parts of the body schema. Vision is not required for this process, although it becomes involved in later stages of development. In this master's thesis, we build upon an earlier thesis by Martin Pecen, in which he implemented biologically inspired neural network models for this purpose. We concentrate on one of the proposed models, BAL (Bidirectional Activation-based Learning). Before the two modalities are associated, both sets of input signals are topographically preprocessed using self-organizing maps. The main contribution of this work is an expanded data set and experiments that were not carried out in the original work. The final data set consists of hundreds of samples, generated fully automatically in the iCub simulator by babbling the robot's arms until self-touch occurs. The model achieves decent performance and generalization during both training and testing, on touch as well as non-touch data.
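To illustrate the topographic preprocessing step mentioned above, the following is a minimal sketch of training a self-organizing map on one modality (e.g., joint-angle vectors). It is not the thesis implementation; the map size, decay schedules, and names such as joint_angles are illustrative assumptions only.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM sketch (illustrative, not the thesis code).
    Rows of `data` are input vectors, e.g. joint angles or tactile activations."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))            # grid of prototype vectors
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    order = rng.permutation(n_steps) % len(data) # random presentation order
    for t, x in enumerate(data[order]):
        lr = lr0 * np.exp(-t / n_steps)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_steps)    # shrinking neighborhood radius
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (h, w))   # best-matching unit
        d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        nb = np.exp(-d2 / (2 * sigma ** 2))      # Gaussian neighborhood function
        weights += lr * nb[..., None] * (x - weights)      # pull prototypes toward x
    return weights

# Usage sketch: preprocess proprioceptive babbling samples before association.
# joint_angles = np.load("joint_angles.npy")    # hypothetical file of babbling data
# som_weights = train_som(joint_angles)
```

The winner (best-matching unit) coordinates on each trained map would then serve as the topographically organized representation of that modality, to be associated by the BAL network.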