We would like to combine a few current technologies into a single system for converting 2D images to 3D interlaced images for lenticular printing. Please note that this project requires specific knowledge of lenticular technology and how it works. We would like to work with someone who has experience in this field, knows the terminology, and has intimate knowledge of bitmap manipulation, lenticular imaging, anaglyphs, interlacing algorithms, etc.
There are 2 primary components:
1. Creation of a depth map from an uploaded image. A grayscale rendering is generated from the uploaded 2D image, which allows the program to infer depth relative to the focal plane from tone, light to dark.
2. Based on the depth map, areas are interlaced in accordance with a user-defined LPI value to match the lenses that will be used. There should be a simple depth adjustment slider that changes the parallax, or possibly just the distance on the focal plane with a fixed parallax.
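The first component can be sketched very simply: a grayscale conversion where pixel tone stands in for depth, light meaning near the viewer and dark meaning far behind the focal plane. This is a minimal illustration, not the full algorithm; the image is modeled as rows of (R, G, B) tuples, and the function name and Rec. 601 luminance weights are assumptions.

```python
def depth_map(pixels):
    """Return a grayscale depth map (0-255) using Rec. 601 luminance.

    Illustrative sketch: light pixels are treated as near the viewer,
    dark pixels as far behind the focal plane.
    """
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in pixels
    ]

# A white pixel maps to the maximum depth value, black to the minimum.
image = [[(255, 255, 255), (0, 0, 0), (128, 128, 128)]]
result = depth_map(image)
print(result)  # [[255, 0, 128]]
```

In a production tool this step would likely also smooth the map and let the user paint corrections, since raw luminance is only a rough proxy for true depth.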
A) User enters configuration parameters. These parameters determine how the image is rendered, and are:
* Canvas size
* Printer DPI
B) User uploads image.
C) Depth map is generated automatically.
D) User sets "level" of 3D effect via a simple depth slider. (less 3D <--> more 3D). Areas of the depth map are interlaced to create depth.
E) User clicks button to "generate 3D" which creates an animated simulation preview under one tab, and an interlaced version under a second tab.
F) User calls a crop tool, which creates a movable light box showing the canvas proportions. When satisfied with the placement, the user initiates the crop.
G) User saves the output image. Remember that the output must meet the target DPI of the printer and be interlaced for the specified LPI.
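The workflow above hinges on the DPI/LPI relationship: the number of interlaced strips under each lens is DPI divided by LPI (e.g. 600 DPI at 60 LPI gives 10 views). The sketch below shows one way steps D and G could fit together, assuming an integer DPI/LPI ratio, nearest-pixel shifts, grayscale rows as flat lists, and illustrative function names; real interlacers are considerably more careful about sub-pixel alignment.

```python
def make_views(image, depth, n_views, strength):
    """Build n_views images, shifting each pixel by a depth-scaled amount.

    strength is the depth-slider value: 0 = flat, larger = more parallax.
    """
    height, width = len(image), len(image[0])
    views = []
    for v in range(n_views):
        # Eye position runs from -0.5 to +0.5 across the view set.
        eye = v / (n_views - 1) - 0.5
        view = [[0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # Disparity grows with depth (0-255) and slider strength.
                shift = round(eye * strength * depth[y][x] / 255)
                sx = min(max(x + shift, 0), width - 1)
                view[y][x] = image[y][sx]
        views.append(view)
    return views

def interlace(views):
    """Column x of the output comes from view (x mod N), one strip per view."""
    n = len(views)
    height, width = len(views[0]), len(views[0][0])
    return [[views[x % n][y][x] for x in range(width)] for y in range(height)]

dpi, lpi = 600, 60
n_views = dpi // lpi          # 10 strips under each lens
image = [[10, 20, 30, 40]]    # one grayscale row, for illustration
depth = [[0, 255, 255, 0]]
out = interlace(make_views(image, depth, n_views, strength=2))
```

Note that the interlaced result keeps the source pixel dimensions; the depth slider only changes `strength`, i.e. how far pixels are displaced between the leftmost and rightmost views.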
Ideally, user can insert movable and resizable layers which can also be rendered independently.
Also, we need a basic input/output API to interface with our existing application. It would work as follows: our application outputs a file to the 3D app, which opens and processes it. Once the user is satisfied with the result, they can output it back to our application. On the front end, it can be a simple hot folder that opens images dropped into it. On the output side, it is simply a user-defined output path. We can handle the rest via processing on our existing app.
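The hot-folder bridge described above could be as simple as a polling loop: watch an input directory, hand each newly dropped file to the 3D app's processing routine, and write the result to the user-defined output path. This is a sketch only; the directory names, polling interval, and the `process` callback are illustrative assumptions, not a fixed API.

```python
import os
import shutil
import time

def watch_hot_folder(in_dir, out_dir, process, poll_seconds=1.0, max_polls=None):
    """Poll in_dir for new files; process each one into out_dir.

    process(src_path, dst_path) stands in for the 3D app's pipeline.
    max_polls limits the loop for demonstration; a real service runs forever.
    """
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        for name in sorted(os.listdir(in_dir)):
            path = os.path.join(in_dir, name)
            if name in seen or not os.path.isfile(path):
                continue
            seen.add(name)
            process(path, os.path.join(out_dir, name))
        polls += 1
        time.sleep(poll_seconds)

# Demo: drop a file into a temporary hot folder and run a single poll.
import tempfile
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
open(os.path.join(src_dir, "scene.png"), "wb").close()
watch_hot_folder(src_dir, dst_dir, lambda s, d: shutil.copy(s, d),
                 poll_seconds=0, max_polls=1)
```

A production version would likely use OS file-system notifications instead of polling, and guard against reading files that are still being written.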
Any lenticular expert will know that the output cannot be resampled or compressed. It must retain its original size and resolution to work with the lenticular lenses.
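This constraint lends itself to an automated pre-save check: refuse to write the file if its pixel dimensions do not exactly match canvas size times printer DPI, or if the DPI is not an integer multiple of the LPI. A minimal sketch, with illustrative parameter names:

```python
def validate_output(width_px, height_px, canvas_in, dpi, lpi):
    """Raise ValueError if the output would need resampling to print.

    canvas_in is the (width, height) canvas size in inches; the saved
    file must match it at the printer DPI exactly, or the strips will
    drift out of registration with the lenses.
    """
    if dpi % lpi != 0:
        raise ValueError(f"DPI {dpi} is not an integer multiple of LPI {lpi}")
    expected = (round(canvas_in[0] * dpi), round(canvas_in[1] * dpi))
    if (width_px, height_px) != expected:
        raise ValueError("output size does not match canvas x DPI; "
                         "resampling would break lens registration")

# A 6x4 inch canvas at 600 DPI must be exactly 3600x2400 pixels.
validate_output(3600, 2400, canvas_in=(6, 4), dpi=600, lpi=60)
```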