Robust ISL Alphabet Gesture Classification Dataset
About
Classifying Indian Sign Language (ISL) gestures requires a robust and diverse set of training data to prevent model overfitting. This collection focuses on hand landmarks rather than raw images, capturing the intricate details of alphabet signs from A to Z. By documenting landmarks at various angles, rotations, and perspectives, the data provides the necessary variability to build highly accurate classification models. To ensure robustness, the records include interchanged hands for single-hand gestures and account for non-symmetrical signs, while a specific feature-engineered column identifies the use of one or two hands to significantly boost model performance.
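The listing does not name the landmark-extraction tool, but the per-hand x/y/z coordinate layout is consistent with the 21-landmark output of MediaPipe Hands. As a rough illustration only (MediaPipe is an assumption, and the image path is a placeholder, not part of the dataset), landmarks of this kind can be extracted like so:

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Placeholder image path for illustration; not included in the dataset.
image = cv2.imread("sign_A.jpg")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Detect up to two hands in a static image.
with mp_hands.Hands(static_image_mode=True, max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    results = hands.process(rgb)

if results.multi_hand_landmarks:
    for hand, handed in zip(results.multi_hand_landmarks, results.multi_handedness):
        label = handed.classification[0].label                  # "Left" or "Right"
        coords = [(lm.x, lm.y, lm.z) for lm in hand.landmark]   # 21 (x, y, z) triples
        print(label, len(coords))
```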
Columns
- target: The categorical integer representing the sign (0 for A through 25 for Z).
- uses_two_hands: A binary indicator (0 or 1) identifying whether the gesture requires both hands.
- left_hand_x_0: The horizontal coordinate for the first landmark of the left hand.
- left_hand_y_0: The vertical coordinate for the first landmark of the left hand.
- left_hand_z_0: The depth coordinate for the first landmark of the left hand.
- left_hand_x_1: The horizontal coordinate for the second landmark of the left hand.
- left_hand_y_1: The vertical coordinate for the second landmark of the left hand.
- left_hand_z_1: The depth coordinate for the second landmark of the left hand.
- left_hand_x_2: The horizontal coordinate for the third landmark of the left hand.
- left_hand_y_2: The vertical coordinate for the third landmark of the left hand.
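The remaining coordinate columns presumably continue the same x/y/z naming pattern for each landmark of the left and right hands; removing the two metadata columns from the 128-column layout leaves 126 coordinates, consistent with 21 landmarks per hand across two hands. A minimal loading sketch in pandas (the file path and the column inferences are assumptions based on this listing, not verified against the file):

```python
import pandas as pd

# Hypothetical local path; adjust to wherever the CSV is stored.
df = pd.read_csv("Indian Sign Language Gesture Landmarks.csv")

# Separate the label, the engineered hand-count flag, and the coordinate features.
y = df["target"]                   # integers 0-25 mapping to letters A-Z
two_hands = df["uses_two_hands"]   # binary flag: one hand vs. two hands
coord_cols = [c for c in df.columns if c not in ("target", "uses_two_hands")]
X = df[coord_cols]                 # 126 landmark coordinates (assumed 2 hands x 21 landmarks x 3 axes)

# Sanity checks against the published layout: 128 columns, ~2,000 rows per class.
print(df.shape)
print(y.value_counts().sort_index())
```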
Distribution
The data is delivered in a single CSV file titled Indian Sign Language Gesture Landmarks.csv, approximately 113.06 MB in size. It contains roughly 50,900 valid records, with each alphabet class (A-Z) featuring approximately 2,000 entries. The file structure consists of 128 columns, providing a high level of detail for hand positioning, with 100% data integrity reported across all fields.
Usage
This resource is ideal for training deep learning models in the field of computer vision and gesture recognition. It is well-suited for building accessibility tools that translate Indian Sign Language into text or speech in real-time. Researchers can also use the landmark coordinates to perform feature selection or to benchmark the performance of different neural network architectures in a classification context.
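As a hedged starting point, the sketch below trains a small fully connected classifier on the landmark features with scikit-learn; the library choice and hyperparameters are illustrative assumptions rather than a recommended configuration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("Indian Sign Language Gesture Landmarks.csv")
y = df["target"]
X = df.drop(columns=["target"])  # keeps uses_two_hands as an extra feature

# Stratified split preserves the roughly equal class balance (~2,000 rows per letter).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# A small feed-forward network as a baseline; hyperparameters are illustrative, not tuned.
clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=200, random_state=42)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```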
Coverage
The geographic scope is centred on India, specifically focusing on the gestures used in Indian Sign Language. The content covers the full English alphabet (A-Z) represented through sign. Temporally, the data is provided as a static snapshot intended for robust model training without the need for frequent updates.
License
CC BY-SA 4.0
Who Can Use It
Machine learning engineers can leverage these records to develop more reliable sign language recognition systems. Academic researchers specialising in human-computer interaction can utilise the landmarks to study the ergonomics and spatial patterns of ISL. Furthermore, students working on intermediate-level computer vision projects will find this a valuable alternative to image-based datasets, which are often prone to overfitting.
Dataset Name Suggestions
- Indian Sign Language (ISL) Hand Landmarks Registry
- Robust ISL Alphabet Gesture Classification Dataset
- Indian Sign Language Spatial Coordinates for Machine Learning
- ISL Gesture Landmark Corpus: A-Z Alphabet
- High-Variability Indian Sign Language Hand Tracking Data
Attributes
Original Data Source: Robust ISL Alphabet Gesture Classification Dataset