I have a project that processes frames from an iOS camera live stream to segment a specific object (a hand) and find its outline using OpenCV. The problem is that processing takes too long, so the preview doesn't look real-time: the screen freezes and stutters, and the app sometimes crashes.
My app currently captures and processes every frame. The iPhone camera delivers 30 fps, but segmenting a single frame takes longer than one frame interval, so our effective processing rate falls well below the capture rate (roughly 10 fps at best).
So I don't think we need to capture and process every frame. Instead, we should capture a single frame at the moment the camera focus is best and the hand is recognized most clearly, process that one frame to find the outline, and then track the object (hand), moving the outline to follow the hand's movement instead of re-running segmentation on every frame.
In summary, there are two problems:
1. Detect when the camera focus is best for capturing the object (hand).
2. Move or transform the outline (hand contour) to follow the hand's movement.
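For problem 1, a common sharpness measure is the variance of the Laplacian: in OpenCV that is typically `cv2.Laplacian(gray, cv2.CV_64F).var()`, computed per frame, keeping the frame with the highest score. The snippet below is a minimal, dependency-free sketch of the same idea (a 4-neighbour Laplacian over a 2-D list of grayscale values), just to illustrate the criterion; a real iOS pipeline would use the OpenCV call on the camera buffer instead.

```python
def laplacian_variance(gray):
    """Sharpness score: variance of the 3x3 Laplacian response.
    `gray` is a 2-D list of grayscale intensities (0-255).
    Higher variance means more edge energy, i.e. better focus."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel: [[0,1,0],[1,-4,1],[0,1,0]]
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A frame with strong edges scores higher than a flat (defocused) one.
sharp = [[0, 255] * 4 for _ in range(8)]
blurry = [[128] * 8 for _ in range(8)]
assert laplacian_variance(sharp) > laplacian_variance(blurry)
```

In practice you would evaluate this score over a short window of frames (or wait until `AVCaptureDevice.isAdjustingFocus` becomes false) and trigger segmentation on the sharpest frame.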
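For problem 2, one cheap approach is to track a handful of feature points between frames (e.g. with `cv2.calcOpticalFlowPyrLK`) and move the stored contour by their average displacement, rather than segmenting again. The sketch below assumes the flow vectors have already been computed; it handles pure translation only, and the function name `shift_contour` is my own placeholder. A fuller solution would estimate rotation and scale as well, e.g. via `cv2.estimateAffinePartial2D`.

```python
def shift_contour(contour, flow_vectors):
    """Translate a segmented hand outline by the mean displacement of
    tracked feature points (e.g. from Lucas-Kanade optical flow),
    instead of re-running segmentation on every frame.
    `contour` is a list of (x, y) points; `flow_vectors` is a list of
    (dx, dy) displacements of the tracked features."""
    if not flow_vectors:
        return list(contour)  # no motion information: keep outline as-is
    dx = sum(v[0] for v in flow_vectors) / len(flow_vectors)
    dy = sum(v[1] for v in flow_vectors) / len(flow_vectors)
    return [(x + dx, y + dy) for (x, y) in contour]

# The outline follows the tracked points; mean displacement here is (4, 0).
outline = [(10, 10), (20, 10), (15, 20)]
moved = shift_contour(outline, [(3, -1), (5, 1), (4, 0)])
assert moved == [(14.0, 10.0), (24.0, 10.0), (19.0, 20.0)]
```

Tracking drifts over time, so a robust pipeline would re-run full segmentation occasionally (say, once per second or when the flow quality drops) to re-anchor the outline.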
Please stay in regular contact with me (messages or voice calls) so we can solve these problems as soon as possible, because the project is somewhat difficult and involves many computer vision techniques.
If possible, we can share our current status using AnyDesk or TeamViewer.
This job has to be finished within 3-4 days.
Don't worry about the payment. I am new to this site, so I don't yet know how hiring freelancers works.
If you do the job wonderfully, we can pay you a bonus of more than $500.