Simple 3D Rendering Engine (OpenGL? Ogre? Godot?)
I need a simple and reasonably fast (read: graphics hardware not CPU) renderer to model data coming from a 3D camera system (similar to the Microsoft Kinect game controllers). This needs to be done with open-source but commercially usable APIs/Libraries/Engines. It needs to work on commodity desktop hardware, running Ubuntu Linux.
My overall objective is to programmatically create/adjust models of the 3D environment observed by the 3D camera system, trying to match (parts of) the observed scene as closely as possible. This does not need to run at full frame rate (30 fps @ 512 x 424 from the Kinect One), but it needs to be fast enough to be at least moderately interactive (1 fps would be OK, 0.1 fps would not). Your problem is not to match the scene itself, just to provide me with a rendering system that I can manipulate to attempt this.
I need to be able to programmatically insert objects into the scene, some of which will contain rigging (both skeletal “bones” and “morphs/shapekeys”). These objects will come from Blender – you will tell me what to export, and in what format (perhaps OpenGEX?), to make it useful to the rendering engine you produce. Note – make sure your chosen import method supports both skeletal rigging and morphs/shapekeys; Assimp, for example, does not appear to have good support for morphs/shapekeys.
I then need to render the Z-buffer from the viewpoint of a known camera matrix, so that I can compare the rendered Z-values to those returned by the real-world camera. I do not actually need to render the visual image (though it will be useful for debugging purposes, so it is a required feature, just not one that has to run at real-time speed).
I will then compare my observed real-world depth values to those from the Z-buffer, and make iterative modifications to the scene (e.g. altering the translation, rotation, pose or morphs of objects). This is likely to be a simple case of least-squares depth differences between the observed and rendered scenes – however, I will have to apply camera distortion corrections (these are quite fast to apply, so not a speed concern, and not your problem – just know that I’m aware I’ll need to do this).
I believe this should be a very simple implementation in OpenGL or similar, or a subset of one of the many open-source 3D rendering or game engines (e.g. OGRE or Godot).
The one complication is that the raw Z-buffer/depth buffer is frequently not directly accessible from game engines (though it is from OpenGL) – this is simply because moving that data from the graphics card to CPU space is considered slow (though on recent Intel processors with integrated graphics, memory is shared with the CPU, so the data is not physically moved about). So do check the capabilities of your chosen library/engine/API before bidding – getting halfway through the project and then realising that your chosen platform doesn’t give easy access to the Z-buffer would be a problem.
My preference would be for code written in C/C++ and/or OpenCL, but if there are good reasons for a different language, let me know. For extra points, using Qt’s OpenGL bindings will put your bid ahead of the pack (assuming they are appropriate for this task – if not, just let me know why).
I can provide some sample 3D captured data, and Blender models of objects in the scene that match this data reasonably well, so that you can gauge the complexity of the task.
In placing a bid, you MUST state the engine/library/APIs you will be using, and confirm that your choice allows access to the Z-buffer/depth buffer and supports mesh models with both skeletal and morph/shapekey modification.