Leap Motion integration with Unity to create a VR learning environment
This project aims to help deaf people and their families use virtual reality to learn sign language, so they can communicate with the outside world by mastering deaf sign languages.
The goal is to create a virtual reality learning environment that captures users' hand motions with Leap Motion technology connected to a VR headset.
Phase 1: the hardware (input)
Link the Leap Motion controller to a Unity VR environment so the user can communicate with a virtual human avatar that has five fingers on each hand.
1- Leap Motion device
2- Unity 5 software
3- VR headset (iPhone, Samsung, HTC, etc.)
Phase 2: the software
The software should store new words and communicate with deaf users by reading those words and linking them to images or 3D models that convey the visual meaning of each item, along with the emotion expressed by the avatar, to be translated into sign language.
Leap Motion serves as the input device both for storing meanings and for recalling signs from words.
Part A (input)
In this part, the public will be able to use VR + Leap Motion to enter signs into the database, linking each sign to the correct word.
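The Part A flow above can be sketched as a small sign store. This is a minimal illustration, not the actual implementation: the `SignDatabase` class and the frame format (five fingertip positions per frame, as a Leap Motion device might report them) are assumptions made for the example.

```python
from dataclasses import dataclass, field

# A frame is simplified here to a tuple of five fingertip (x, y, z)
# positions; real Leap Motion frames carry much richer hand data.

@dataclass
class SignDatabase:
    """Maps a word to the hand-motion frames recorded for its sign."""
    signs: dict = field(default_factory=dict)

    def record_sign(self, word: str, frames: list) -> None:
        """Store a recorded sign, linking it to the correct word."""
        self.signs[word.lower()] = frames

    def recall_sign(self, word: str):
        """Return the stored frames for a word, or None if unknown."""
        return self.signs.get(word.lower())

# Usage: record a toy two-frame sign for "hello" and recall it.
db = SignDatabase()
db.record_sign("Hello", [((0, 0, 0),) * 5, ((1, 2, 3),) * 5])
print(db.recall_sign("hello") is not None)  # → True
```

Keying the store by lowercased word keeps lookup consistent however the public types the word when recording a sign.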
Part B (software processing abilities)
A- The software should be able to translate words directly into the avatar's sign language.
B- The software should also be able to recognize the user's signs and link them to words.
C- Learning mode: users can perform any sign and be corrected by the avatar if the sign is done incorrectly.
D- Users should also be able to perform Part A (entering new signs).
E- The system should be able to browse online sites and translate them into sign language.
F- The system should be able to open PDF documents and render them in sign language as well.
G- The system should be able to connect directly to any video, recognize the speech, and translate it to text and then to sign language.
H- The system should also support direct voice-to-text recognition, translating speech into letters or words.
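Items B and C (recognizing a user's sign and correcting it in learning mode) could be sketched with a simple nearest-neighbour match against stored sign templates. Everything here is a hypothetical stand-in for real gesture recognition: the distance functions, the frame format, and the `threshold` value are all assumptions made for illustration.

```python
import math

def frame_distance(f1, f2):
    """Mean Euclidean distance between corresponding fingertips."""
    return sum(math.dist(a, b) for a, b in zip(f1, f2)) / len(f1)

def sign_distance(sign1, sign2):
    """Average frame-by-frame distance (equal-length signs assumed)."""
    n = min(len(sign1), len(sign2))
    return sum(frame_distance(sign1[i], sign2[i]) for i in range(n)) / n

def recognize(user_sign, templates, threshold=10.0):
    """Return (closest word, sign-done-correctly?) for a user's sign.

    The learning mode would use the boolean to decide whether the
    avatar should correct the user.
    """
    best_word, best_d = None, float("inf")
    for word, tmpl in templates.items():
        d = sign_distance(user_sign, tmpl)
        if d < best_d:
            best_word, best_d = word, d
    if best_word is None:
        return None, False
    return best_word, best_d <= threshold

# Toy templates: one-frame "signs" with five fingertips each.
templates = {"hello": [((0, 0, 0),) * 5], "thanks": [((50, 0, 0),) * 5]}
word, ok = recognize([((1, 0, 0),) * 5], templates)
print(word, ok)  # → hello True
```

A production system would replace this with a sequence model robust to timing and hand-size variation, but the interface (user sign in, matched word and correctness out) is the same one Part B describes.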
Part C (output)
A- The software should provide an API for third parties, covering:
a. A website avatar that renders words as short clips, exporting each word's sign as a GIF or short video.
b. Hardware accessories that clone the avatar or the device on third-party hardware connected to the database.
c. The ability to export avatar emotions to any mobile app.
B- Reports on the number of users and their time spent using the learning tool.
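The Part C surface, an API call that returns a word's sign clip and emotion data while logging usage for the reports, could look like the sketch below. The field names, the example URL, and the in-memory log are all illustrative assumptions; a real deployment would serve this over HTTP against the shared database.

```python
import json

SIGN_CLIPS = {"hello": "https://example.com/signs/hello.gif"}  # placeholder URL
usage_log = []  # (user_id, word) pairs, aggregated into reports later

def get_sign(user_id: str, word: str) -> str:
    """Return a JSON response for one word's sign, logging the request."""
    usage_log.append((user_id, word))
    clip = SIGN_CLIPS.get(word.lower())
    body = {"word": word, "clip_url": clip, "emotion": "neutral",
            "found": clip is not None}
    return json.dumps(body)

def usage_report() -> dict:
    """Part C, item B: distinct users and total requests so far."""
    return {"users": len({u for u, _ in usage_log}),
            "requests": len(usage_log)}

print(get_sign("u1", "hello"))
print(usage_report())  # → {'users': 1, 'requests': 1}
```

Returning JSON keeps the same endpoint usable by the website avatar, hardware accessories, and mobile apps listed above.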