Discrete Music Emotion Dataset
About
This dataset serves the purpose of classifying music based on emotional content, specifically focusing on Turkish music. It follows a discrete model, differentiating between four primary emotion classes: happy, sad, angry, and relax. The data consists of acoustic features extracted from selected verbal and non-verbal music samples gathered across various Turkish music genres. It is an ideal resource for developing and testing machine learning models focused on music information retrieval and emotion recognition systems.
Columns
The dataset contains 51 distinct columns, primarily composed of acoustic features derived from the music samples. The critical classification column is Class, which holds one of the four unique emotion labels (happy, sad, angry, or relax).
The remaining 50 columns include extensive mean values for various acoustic descriptors, such as:
- Root Mean Square Energy (_RMSenergy_Mean)
- Low Energy (_Lowenergy_Mean)
- Fluctuation (_Fluctuation_Mean)
- Tempo (_Tempo_Mean)
- Mel-Frequency Cepstral Coefficients (_MFCC_Mean_1 through _MFCC_Mean_13)
- Roughness (_Roughness_Mean and _Roughness_Slope)
- Zero-crossing Rate (_Zero-crossingrate_Mean)
- Attack Time (_AttackTime_Mean and _AttackTime_Slope)
- Rolloff (_Rolloff_Mean)
- Event Density (_Eventdensity_Mean)
- Pulse Clarity (_Pulseclarity_Mean)
- Brightness (_Brightness_Mean)
- Spectral features (Centroid, Spread, Skewness, Kurtosis, Flatness, and Entropy)
- Chromagram means (_Chromagram_Mean_1 through _Chromagram_Mean_12)
- Harmonic Change Detection Function features (Mean, Std, Slope, Period Frequency, Period Amplitude, Period Entropy)
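For a quick first look at the table, the minimal sketch below loads the file and prints its shape, column names, and class counts. It assumes the file keeps the name Acoustic Features.csv and the Class column described above; both should be verified against the actual download.

```python
import pandas as pd

# Load the acoustic feature table. The file and column names assumed here
# follow the listing above; verify them against the actual download.
df = pd.read_csv("Acoustic Features.csv")

print(df.shape)             # expected: (400, 51)
print(df.columns.tolist())  # 50 acoustic descriptors plus the Class label

# Class holds one of the four discrete labels: happy, sad, angry, relax.
print(df["Class"].value_counts())
```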
Distribution
The dataset is structured as a static file named Acoustic Features.csv, weighing approximately 172.39 kB. It contains a total of 400 samples, with 51 features recorded for each sample. The data is balanced, with 100 samples provided for each of the four emotion classes (happy, sad, angry, relax). Each original music sample used for feature extraction was 30 seconds in duration. All samples reviewed show 100% validity with no missing or mismatched values. The data is not expected to receive future updates.
Usage
This resource is best utilised for research and development in several key areas:
- Multiclass emotion classification model training (see the sketch after this list).
- Music information retrieval (MIR) studies focusing on cultural acoustic features.
- Building music recommendation systems or affective computing tools sensitive to emotional content.
- Acoustic analysis to understand the correlation between objective musical features and subjective emotional response.
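As an illustration of the first use case, the sketch below trains a simple multiclass classifier on the acoustic features. It is not part of the dataset itself: the file name and Class column are assumed from the description above, and the random forest is only an arbitrary baseline choice via scikit-learn.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Assumes the CSV and the Class label column described above.
df = pd.read_csv("Acoustic Features.csv")
X = df.drop(columns=["Class"])
y = df["Class"]

# A stratified split preserves the 100-samples-per-class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

# A random forest is used purely as a simple baseline classifier.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

Any of the four emotion classes can then be inspected via the per-class precision and recall in the printed report.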
Coverage
The scope of the data encompasses Turkish music, featuring both verbal and non-verbal selections from different genres. The dataset is fixed, representing a static snapshot of acoustic features used for emotion modelling.
License
Attribution 4.0 International (CC BY 4.0)
Who Can Use It
Intended users include data scientists creating emotion recognition models, machine learning practitioners working on audio feature analysis, researchers studying music and psychology, and academics focusing on cross-cultural analysis of music expression.
Dataset Name Suggestions
- Turkish Music Emotion Features
- Turkish Music Acoustic Emotion Classifier
- Discrete Music Emotion Dataset (Turkish)
- Turkish Song Emotion Data
Attributes
Original Data Source: Discrete Music Emotion Dataset