Sign Language with PyTorch GRU


I'm currently training a GRU model on American Sign Language (ASL) using a Kaggle dataset.
While tweaking the parameters I reached a peak accuracy of 44.7% on training and 28.2% on testing.
That is after extracting keypoints from the videos, normalizing them, and using attention pooling before passing the sequence to the classifier.
I tried changing the hidden size, dropout values, and number of layers, yet couldn't get past those values.
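For context, the pipeline described above (keypoint sequences → GRU → attention pooling → classifier) can be sketched roughly like this. This is a minimal illustrative version, not the code from the linked repo: the layer sizes, bidirectionality, and input dimension (150 features per frame) are assumptions.

```python
# Sketch of the described architecture: keypoints -> GRU -> attention pooling
# -> linear classifier. All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class GRUAttnClassifier(nn.Module):
    def __init__(self, in_dim=150, hidden=256, layers=2, n_classes=100, dropout=0.3):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, num_layers=layers,
                          batch_first=True, dropout=dropout, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # scores each timestep
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, frames, in_dim)
        h, _ = self.gru(x)                      # (batch, frames, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        pooled = (w * h).sum(dim=1)             # weighted sum -> (batch, 2*hidden)
        return self.head(pooled)                # logits: (batch, n_classes)

model = GRUAttnClassifier()
logits = model(torch.randn(4, 32, 150))         # 4 clips, 32 frames each
print(logits.shape)                             # torch.Size([4, 100])
```

Attention pooling here is the simple "learned scalar score per timestep, softmax over time" variant; the repo may use a different formulation.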

I honestly don't know what more I can do; any suggestion is welcome.
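For reference, the keypoint normalization step mentioned above could look something like the following. This is only one plausible scheme (per-frame centering on a reference landmark plus scale normalization), assumed for illustration; the repo's exact normalization may differ.

```python
# One plausible per-frame keypoint normalization (an assumption, not the
# repo's exact scheme): translate each frame so a reference landmark sits at
# the origin, then scale so coordinates are roughly in [-1, 1].
import numpy as np

def normalize_frames(kp, ref_idx=0, eps=1e-6):
    """kp: (frames, landmarks, coords) array of raw keypoint coordinates."""
    centered = kp - kp[:, ref_idx:ref_idx + 1, :]             # translate per frame
    scale = np.abs(centered).max(axis=(1, 2), keepdims=True)  # per-frame extent
    return centered / (scale + eps)                           # scale-invariant

frames = np.random.rand(32, 75, 3)   # e.g. pose + two hands stacked per frame
norm = normalize_frames(frames)
print(norm.shape)                    # (32, 75, 3)
```

Normalizing like this removes dependence on where the signer stands in the frame and how large they appear, which usually matters more than extra GRU capacity.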

Link to the code: https://github.com/HmedNejjar/Sign-Language
Dataset I used: https://www.kaggle.com/datasets/risangbaskoro/wlasl-processed

Using: Python 3.12, PyTorch 2.6.0+cu124, MediaPipe 0.10.13, NumPy 2.4.2, OpenCV 4.13.0

Files to download (MediaPipe landmarker models): Hand Landmarker and Pose Landmarker

Run in terminal (PowerShell), once per landmarker:

Invoke-WebRequest -Uri '<landmarker link>' -OutFile '<landmarker>.task'

Order of programs to run:

precompute.py → train.py → main.py