This project proposes the development and improvement of several scalable interactions in our User Interface (UI) within a Medical Imaging (MI) scope. The purpose of this UI is to provide clinicians with efficient interaction across the MI technologies in use. The project builds on a technique recently proposed in the literature: Artificial Intelligence (AI) models. These models incorporate information from several different modalities through a UI backed by AI-assisted techniques. Our aim is therefore to improve our already implemented MI Assistant, which supports users during diagnosis.
The following paragraphs detail each task and what it is supposed to do. For this issue, the tasks are threefold: (1) Parallel Views; (2) Automatically Center; and (3) Coordinate Viewports. Each task will be detailed further in its own issue.
For Parallel Views, when the user opens one of the available views, the system automatically opens both the Left (L) and Right (R) sides. The user must first split the viewport in two, for instance by choosing the 1x2 or 2x1 option, so that the two views can be paired, as sketched below.
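A minimal TypeScript sketch of this pairing follows. All names (`SplitLayout`, `Viewport`, `splitViewport`) are illustrative assumptions, not an existing API; the point is only that choosing a 1x2 or 2x1 split yields both sides at once.

```typescript
// Hypothetical layout and viewport types; names are illustrative only.
type SplitLayout = "1x2" | "2x1";

interface Viewport {
  id: string;
  side: "L" | "R";
}

// Split the main viewport in two and pair the resulting views.
// 1x2 = side by side, 2x1 = stacked; the pairing logic is the same.
function splitViewport(layout: SplitLayout): [Viewport, Viewport] {
  const left: Viewport = { id: `${layout}-0`, side: "L" };
  const right: Viewport = { id: `${layout}-1`, side: "R" };
  return [left, right];
}

// Opening one of the available views then yields both sides automatically:
const [l, r] = splitViewport("1x2");
console.log(l.side, r.side); // "L" "R"
```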
Currently, each time the user zooms in or zooms out, the inner limit of the image is not shifted toward the center of the main viewport, where the center of the main viewport acts as the divider between the two views of the image. The idea of the Automatically Center task is to develop a mechanism that "centers" the two images (when the viewport is split in two with the 1x2 option) so that the limits of the images sit side by side against that divider, as in the sketch below.
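The sketch below illustrates one way this centering could work for a 1x2 split, under an assumed coordinate model: each viewport is described by its width in pixels, the image by its displayed width after zoom, and `panX` by the horizontal offset of the image's left edge within its own viewport. These names and the model are assumptions for illustration, not the project's actual implementation.

```typescript
// Assumed per-viewport state; all fields are hypothetical.
interface ViewState {
  viewportWidth: number; // viewport width in px
  imageWidth: number;    // displayed image width in px at the current zoom
  panX: number;          // horizontal offset of the image's left edge, px
}

// After a zoom, re-pan each image so its inner edge touches the shared
// divider: the left view pushes its image's right edge to the viewport's
// right boundary; the right view pulls its image's left edge to the
// viewport's left boundary.
function centerAgainstDivider(view: ViewState, side: "L" | "R"): ViewState {
  const panX = side === "L" ? view.viewportWidth - view.imageWidth : 0;
  return { ...view, panX };
}
```

Calling `centerAgainstDivider` on both views after every zoom event would keep the two image limits side by side regardless of the zoom ratio.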
When the main viewport is split into several viewports (e.g., 1x2, 2x1, or 2x2), the idea of the Coordinate Viewports task is to provide an activation button that applies the same operation across all open viewports, which should, at least hypothetically, reduce diagnosis time. The intended behavior is the following: when the user zooms in on the first viewport, for instance, the same zoom ratio is also applied to the other viewports. A sketch of this propagation follows.
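A hedged sketch of this coordination toggle is shown below. The event wiring (`ViewportCoordinator`, `register`, `onZoom`) is hypothetical; the point is that, while coordination is active, a zoom ratio applied in one viewport is re-applied to every other open viewport.

```typescript
// Callback each viewport provides to apply a zoom ratio to itself.
type ZoomListener = (ratio: number) => void;

// Hypothetical coordinator behind the activation button.
class ViewportCoordinator {
  private active = false;
  private viewports = new Map<string, ZoomListener>();

  register(id: string, applyZoom: ZoomListener): void {
    this.viewports.set(id, applyZoom);
  }

  // Bound to the activation button: toggles coordination on and off.
  toggle(): void {
    this.active = !this.active;
  }

  // Called when the user zooms inside `sourceId`; propagates the same
  // ratio to the remaining viewports while coordination is enabled.
  onZoom(sourceId: string, ratio: number): void {
    if (!this.active) return;
    for (const [id, applyZoom] of this.viewports) {
      if (id !== sourceId) applyZoom(ratio);
    }
  }
}
```

The same pattern would extend beyond zoom to any shared feature (pan, window level, etc.) by registering one listener per operation.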