This paper describes our attempt to assess the capability of music melodies, taken in isolation, to classify music files into different emotional categories in the context of Sri Lankan music. In our approach, melodies (predominant pitch sequences) are extracted from songs, and the feature vectors created from them are subjected to supervised learning with different classifier algorithms, as well as algorithms that enhance classifier accuracy. The models we trained did not perform well enough to classify songs into different emotions, but they consistently indicated that melody is an important factor for the classification. Further experiments combining melody features with certain non-melody features performed much better, leading us to conclude that, although melody plays a major role in differentiating emotions into categories, it needs the support of other features for proper classification.
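The pipeline summarized above (pitch sequence → feature vector → supervised classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature set, the use of a random forest, and the synthetic stand-in pitch data are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def melody_features(pitch_hz):
    """Summarize a predominant-pitch sequence (in Hz) into a fixed-length vector."""
    voiced = pitch_hz[pitch_hz > 0]           # keep voiced frames only
    return np.array([
        voiced.mean(), voiced.std(),          # overall pitch level and spread
        voiced.max() - voiced.min(),          # melodic range
        np.abs(np.diff(voiced)).mean(),       # average frame-to-frame interval
    ])

# Stand-in data: 40 synthetic pitch sequences with two emotion labels.
rng = np.random.default_rng(0)
X = np.vstack([melody_features(rng.uniform(100, 400, 500)) for _ in range(40)])
y = np.repeat([0, 1], 20)                     # hypothetical emotion classes

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(scores.mean())
```

In practice the pitch sequences would come from a melody-extraction stage run on the audio, and the feature vector would be richer than these four summary statistics.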
This paper describes a method of modeling the characteristics of a singing voice from polyphonic audio signals, in the context of Sri Lankan music. The main difficulty in modeling the characteristics of a singing voice is the negative influence of accompaniment sounds; hence, the proposed method includes a procedure to reduce the effect of the accompaniment. It extracts the predominant melody frequencies of the music file and then resynthesizes them. The melody is extracted only from the vocal parts of the music file to achieve better accuracy. Feature vectors are then extracted from the predominant melody frequencies and subjected to supervised learning with different classifiers, using Principal Component Analysis as the feature selection algorithm. The models are trained with 10-fold cross-validation, and different combinations of experiments are carried out to critically analyze the performance of the proposed method for singer identification.
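The evaluation stage described above (melody-derived features → PCA → classifier, scored with 10-fold cross-validation) can be sketched as below. This is a hedged illustration only: the SVM classifier, the number of PCA components, and the synthetic feature matrix are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in data: 100 songs x 20 melody-derived features, 5 hypothetical singers.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
y = np.repeat(np.arange(5), 20)

# PCA reduces the feature space before classification, mirroring the
# feature-selection role described in the abstract.
model = make_pipeline(PCA(n_components=5), SVC())
scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation
print(round(scores.mean(), 3))
```

Putting PCA inside the pipeline ensures it is fitted only on each fold's training split, so the cross-validation estimate is not contaminated by the held-out data.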