Abstract: |
Living without effective communication poses significant challenges for humans. People employ various methods to convey and share thoughts and ideas between sender and receiver. Two of the most prevalent means of communication are verbal speech, which relies on auditory perception, and non-verbal communication through gestures, involving bodily movements such as hand motions and facial expressions. Sign language, a gestural language, is a unique form of communication that relies on visual perception for understanding and expression. While many people incorporate gestures into their communication, for deaf individuals sign language is often the primary and essential means of communication. Deaf and mute individuals rely on communication to interact with others, gain knowledge, and engage with their surroundings, and sign language serves as a crucial link that narrows the distance between them and the broader society. To enhance this communication, we created models capable of recognizing sign language gestures and translating them into conventional text. By training these models on a dataset using neural networks, remarkable results were attained. This technology enables people without prior knowledge of sign language to understand the intentions and messages of individuals with hearing disabilities, fostering greater inclusivity and accessibility in society. Three algorithms were used in this work, and all achieved strong results: Logistic Regression at 99%, Random Forest at 98%, and Decision Tree at 91%.
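The training-and-evaluation pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration: the synthetic feature matrix stands in for the real sign-language dataset (for example, extracted hand-landmark coordinates), and the dataset size, feature count, and class count are assumptions, not values from the paper. The three classifiers named in the abstract are compared on held-out test accuracy.

```python
# Hypothetical sketch of the evaluation pipeline from the abstract.
# Synthetic data replaces the real gesture dataset; 42 features stand
# in for x/y coordinates of 21 hand landmarks, 26 classes for letters.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(
    n_samples=2000, n_features=42, n_informative=20,
    n_classes=26, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0,
)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

accuracies = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    accuracies[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {accuracies[name]:.2%}")
```

On the real dataset the paper reports 99%, 98%, and 91% accuracy respectively; the synthetic data here will yield different numbers, since it only demonstrates the comparison procedure.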