
Biomedical Journal of Scientific & Technical Research

July 2019, Volume 19, Issue 5, pp 14678-14685

Research Article

What is the Future of Minimally Invasive Sinus Surgery: Computer-Assisted Navigation, 3D-Surgical Planner, Augmented Reality in the Operating Room with ‘in the Air’ Surgeon’s Commands as “Biomechanics” of the New Era in Personalized Contactless Hand-Gesture Non-Invasive Surgeon-Computer Interaction?

Klapan Ivica1,2,3,4*, Duspara Alen6, Majhen Zlatko5,7, Benić Igor8, Kostelac Milan8, Kubat Goranka9, Berlengi Nedjeljka10, Zemba Mladen10, Žagar Martin11

Author Affiliations

1 Division of ENT-Head and Neck Surgery, Klapan Medical Group Polyclinic, Zagreb, Croatia, EU

2 Josip Juraj Strossmayer University of Osijek, School of Medicine, Osijek, Croatia, EU

3 University of Zagreb, School of Medicine, Zagreb, Croatia, EU

4 Josip Juraj Strossmayer University of Osijek, School of Dental Medicine and Health, Croatia, EU

5 Bitmedix, Zagreb, Croatia, EU

6 Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia, EU

7 Avanza, Zagreb, Croatia, EU

8 Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Croatia, EU

9 Division of Radiology, Agram Special Hospital, Zagreb, Croatia, EU

10 Division of Anesthesiology, Klapan Medical Group Polyclinic, Zagreb, Croatia, EU

11 Rochester Institute of Technology, RIT Croatia, Zagreb, EU

Received: July 11, 2019 | Published: July 23, 2019

Corresponding author: Ivica Klapan, Klapan Medical Group Polyclinic, Ilica 191A, HR-10000 Zagreb, Croatia, EU

DOI: 10.26717/BJSTR.2019.19.003377

Abstract

Purpose: We focused on the development of a personal 3D-navigation system and the application of augmented reality in the operating room per viam personalized contactless hand-gesture non-invasive surgeon-computer interaction, aiming at higher intraoperative safety and reductions of operating time and of the length of postoperative patient recovery.

Methods: Simultaneous use of video image, 3D anatomic fields and navigation in space, with the application of our original plug-in for the OsiriX platform, which enables users to use the LM sensor as an interface for camera positioning in 3DVR and VE views and integrates speech recognition as a VC solution, in an original way.

Results: Management of 2D-3D image and video medical documentation, as well as control of marker-based virtual reality simulation in real time during the real operation, per viam our personalized contactless “in the air” surgeon’s commands.

Conclusion: The use of modern technologies in head and neck surgery over the last 30 years (e.g., FESS, NESS, and robotic surgery) has enabled surgeons to demonstrate spatial anatomic elements in the operating field in ways that were quite inconceivable before. This approach has not yet been used in rhinosinusology or otorhinolaryngology and, to our knowledge, not even in general surgery. The question that we have to ask ourselves now is: what prerequisites, realizable in the future, should be met for our “on the fly” gesture-controlled and incisionless virtual surgical interventions to satisfy the most demanding requirements in the OR?

Abbreviations: API: Application Program Interface; AR: Augmented Reality; CAS: Computer Assisted Surgery; FESS: Functional Endoscopic Sinus Surgery; FDA: Food and Drug Administration; HW: Hardware; IDE: Integrated Development Environment; IT: Information Technology; LM: Leap Motion; MIS: Minimally Invasive Surgery; MRI: Magnetic Resonance Imaging; MSCT: Multislice Computed Tomography; N-CAS: Navigation Computer Assisted Surgery; NESS: Navigation Endoscopic Sinus Surgery; OR: Operating Room; OMC: Ostiomeatal Complex; ROI: Region of Interest; SDK: Software Development Kit; SI: Swarm Intelligence; SW: Software; VC: Voice Command; VE: Virtual Endoscopy; VS: Virtual Surgery; VR: Virtual Reality; VRen: Volume Rendering; VW: Virtual World; 2D: Two-Dimensional; 3D: Three-Dimensional; 3DVR: 3-Dimensional Volume Rendering

Keywords: Gesture control; Voice commands; Region of interest; 3D volume rendering; Leap Motion; OsiriX MD; Virtual endoscopy; Virtual surgery; Contactless surgery; Swarm intelligence

Introduction

Since the very beginning of this most prestigious field of medicine, surgeons all over the world have always tended to conceive and then implement perfect, ideal conditions in the OR. Improved conditions in surgical practice and considerably longer patient survival have been achieved since the time of Sir Joseph Lister, Bt., a British surgeon and pioneer of antiseptic surgery, through the use of numerous innovations in preoperative planning, intraoperative procedures and postoperative analysis of surgical procedures [1], in particular the use of MIS. At the turn of the 20th and 21st centuries, a new developmental incentive in surgery emerged with the advent of 3D visualization of anatomy and pathology, and of differential-color imaging of various tissues according to their transparency (pixel) differences on digital 2D 256-level-gray medical diagnostic images.

It was followed by the development of 3D computer-assisted surgery (CAS) [2] and navigation CAS (N-CAS) at the beginning of the 1990s, with the use of 3D digitizers (‘robotic arm’) [3-5], tele-3D-CAS [6,7], VR techniques [1,8] and virtual simulators in surgery [9], facilitating spatial orientation (with ‘six degrees of freedom’) for the pioneers of this type of surgery. Nowadays, at the beginning of the 21st century, the use of CAS/robots in surgery has taken hold [10,11], having substantially expanded the limits of operability in many borderline operable or inoperable patients [12]. For this purpose, we used OsiriX MD (certified for medical use, FDA cleared and CE II labeled; the most widely used DICOM viewer, with advanced post-processing techniques in 2D-3D, as well as 3D-4D navigation) [13], Leap Motion (a computer hardware sensor device that supports hand and finger motions as input and requires no hand contact or touching) [14], and our specially designed SW that integrates the Leap Motion controller with medical imaging systems.

Patients and Methods

OsiriX MD

Among the vast selection of tools that OsiriX provides to the end user, there is a subset that significantly simplifies working with ROI segments. Freehand tools like pencil, brush, polygon and other similar shapes enable precise selection of segments, whereas the more sophisticated 2D/3D grow region tool can automatically find the edges of the selected tissue by analyzing surrounding pixel density and recognizing similarities. Using a graphics tablet as a more natural user interface within the OsiriX image editor can simplify the segmentation routine and improve selection precision.

Our Contactless-Hand-Gesture Non-Invasive Surgeon-Computer Interaction

Figure 1: OR monitors for 3D-VE/VR-visualization of the operative field.

We have developed a special plug-in application for the OsiriX platform, enabling users to use the LM sensor as an interface for camera positioning in 3DVR and VE views (Figure 1) and integrating speech recognition as a VC solution, in an original way that is not comparable with projects already found on the market. We have defined different types of gestures for 3DVR and VE, which enable navigation through virtual 3D space, adjusting the viewing angle and camera position using only one hand. Consequently, we have minimized surgeon distraction while interacting with the positioning system. We have chosen gestures that people can easily correlate with natural movements when interacting with real-world objects, making navigation as fast as possible while maintaining precision and shortening learning time.

Aiming at easier navigation through the application and use of its functionalities with contactless interfaces, we used VCs. We enabled application functions to be used in an unobtrusive manner, independently of the currently active application view, 3DVR or VE. Furthermore, a switch VC allows the user to toggle the currently active application view. All ROIs in different slices that are associated with each other should share the same name, forming an aggregate that represents a 3D segmentation object. These 3D regions are assigned a keyword that is used as a VC, enabling the user to actively adjust the visibility of regions in the visualization preview. It is advisable to mark regions using contrasting colors for easier recognition when multiple ROI segments are present in the current view. Keywords used for VCs are simple and short, which we found to perform better with offline voice recognition. This also means that VC recognition is not limited by network speed or availability, adding further stability and reliability to the system.

The navigation in virtual space is achieved by assigning three orthogonal axes, as shown in Figure 2, to the camera position control. In 3DVR, the Y-axis is translated to vertical orbit, the X-axis to horizontal orbit, and the Z-axis to zoom. The VE camera gains acceleration based on the difference between the starting and ending Z positions. Horizontal and vertical camera turning is assigned to the X- and Y-axes, respectively.

Figure 2: Navigation in virtual space is achieved by assigning three orthogonal axes to the camera position control.
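To make the mapping concrete, the sketch below expresses the axis-to-camera translation described above and in Figure 2 in plain Swift. The types and the gain constant are hypothetical stand-ins, not the plug-in's actual code: the real implementation reads palm positions from the Leap Motion SDK and drives the OsiriX 3DVR/VE camera, neither of which is reproduced here.

```swift
import Foundation

// One palm position in the sensor's coordinate system (millimetres).
struct PalmSample {
    var x: Double   // left/right
    var y: Double   // up/down
    var z: Double   // toward/away from the sensor
}

// Camera motion derived from a palm displacement. In 3DVR the three
// components are horizontal orbit, vertical orbit and zoom; in VE they
// are yaw, pitch and travel acceleration.
struct CameraDelta {
    var horizontal: Double
    var vertical: Double
    var depth: Double
}

/// X -> horizontal, Y -> vertical, Z -> zoom/acceleration, as in Figure 2.
/// The displacement is measured from the position where the gesture began,
/// so the hand's resting point acts as a neutral zero. The gain is an
/// assumed tuning constant.
func cameraDelta(from start: PalmSample, to current: PalmSample,
                 gain: Double = 0.01) -> CameraDelta {
    CameraDelta(horizontal: (current.x - start.x) * gain,
                vertical:   (current.y - start.y) * gain,
                depth:      (current.z - start.z) * gain)
}
```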

System Tenet

A number of relevant factors were taken into consideration when defining the positioning of the computer and the LM controller in the OR, as follows:

Position of LM Controller Relative to the Surgeon, Anthropometry, Definition of Space Ergonomics: The surgeon was assumed to be right-handed and sitting during the surgery; therefore, the LM controller was placed to his right, within reach of his outstretched right hand. Although the LM controller can be placed in a vertical position, upon testing we concluded that the horizontal position was ideal, with the palm positioned 15-20 cm above the controller. In this way, we ensured the basic conditions for the surgeon to turn his palm according to the direction, height, and depth of the space around the operating table without the need to change the position of the chair, stand up, or rotate his body. This is especially important when the surgeon holds the endoscope or some other instrument, as it prevents undesired shifting of the instrument held by his left hand because there is no change in the surgeon’s body position (no rotation, height change, inclination, etc.).

Position of the Monitor (Distance, Height, Inclination), Image Quality: A mechanical arm serving as the holder for all monitors is mounted on the OR ceiling, enabling precise positioning of the monitors according to direction, height and depth relative to the surgeon’s field of vision. Two monitors are mounted on the mechanical arm:

a. Monitor for 2D MSCT/MRI slices/nasal-sinus movies/base and for endoscopic presentation of the operative field ‘live video image’ (monitor screens are changed by the surgeon’s VC or by a remote control operated by the assistant); and

b. Monitor for a separate VE image and 3D-animation models for VC per viam DICOM viewer/OsiriX-LM (Figure 3).

Figure 3: LM-VE-3DVR-VC in our OR (taken with permission of Klapan Medical Group Polyclinic, Zagreb, Croatia, EU).

Position of Auxiliary Staff Members (Anesthesiologist, Nurse, Scrub Nurse, Assistant): Positioning of the auxiliary staff members is determined by the respective plan, as in any standard OR. A substantial precondition to be met in developing this new system was that the auxiliary staff members could approach the LM controller without distracting the surgeon or entering his field of work.

Monitor Technical Properties (Calibration for DICOM Images): The native resolution of our monitors is 1920x1080, and they can display original full-HD images without blurring or distortion [15]. For the presentation of DICOM images and 3D-rendered models we used a Dell UltraSharp U2413 monitor because of its high-quality color reproduction, wide viewing angles (images can be viewed simultaneously), very high pixel density, and compliance with key medical standards [15] (NEMA DICOM; ICC profiles can be used for reviewing color medical images [16]). For calibration purposes, we used QUBYX Perfectum SW. The third monitor in the OR serves for telemedicine consultations and operation monitoring; it is mounted on a separate wall bracket (Figure 4).

Figure 4: Position of three monitors in the OR during our 3D-VE-NES, which integrates LM controller with medical imaging systems with ‘in the air’ real control of surgeon’s hands.

Computer, OS, Cables: The computer (Apple Mac mini) is placed at the back of the Dell monitor. Its dimensions and mass are small; thus, it does not add much weight to the mechanical arm carrying the computer and the monitor. In this way, the length of the cables (power, USB, HDMI) has been reduced to only 30-40 cm. A special short metal holder designed for the LM controller is fastened under the monitor.

Case Report

A 22-year-old male patient, a student, presented with a six-month history of recurrent nasal blockage and associated pain, which extended over the region of the maxillary sinuses, sometimes reaching up to the frontal sinuses. Medical therapy had proved inefficient. The patient had received therapy with Decortin [17] for 2 weeks, with dose tapering, at another medical institution. At our institution, the patient underwent rhinoscopy, followed by flexible fiberoptic nasopharyngoscopy, which confirmed the diagnosis of nasal septum deviation with a predominant septal spur impressing the soft tissue wall of the left nasal cavity. In addition, a large blue-gray opaque growth was detected, completely occupying the OMC, all middle and posterior segments of the left nasal cavity, and the entire epipharynx down to the border of the oropharyngeal region.

On diagnostic endoscopy, great amounts of thick mucus were found to have migrated from the middle meatus to the left nasopharynx. On the left, there was a mass of soft tissue between the middle turbinate and the lateral nasal wall; behind these, thick mucus was seen coming from the natural ostium of the maxillary sinus. There was an accessory ostium in the posterior fontanelle, and thick mucus re-entered the sinus here, ‘travelling in circles’. We believed the patient was a candidate for rhinosurgery. We also considered this state a good indication for septoplasty, laser surgery of both inferior nasal turbinates and, for the first time, the VE-VS-LM-NESS, during which we wanted to ensure ventilation of the natural ostia and thus hopefully not only provide the patient with improved airways, but also remove the pathologic tissue and break the vicious circle of secretion. Prior to the operation, multiple MRI acquisitions were obtained through the nose, paranasal sinuses and skull base, with 2D reconstruction.

The left maxillary sinus was completely obstructed (there was no demineralization or erosion of the sinus walls), and so were the ipsilateral OMC and frontoethmoidal recess. No abnormalities were observed in the ipsilateral anterior and posterior ethmoid air cells, or in the contralateral paranasal sinuses. Mucosal thickening caused narrowing of the left sphenoethmoidal recess, but this region remained patent. There was deviation of the nasal septum to the left, with a septal spur abutting the left inferior and medial nasal turbinate, entering the left middle meatus, with additional hypertrophy of both inferior nasal turbinates. The left concha bullosa was partly opacified with fluid. No abnormality of the maxillary dental row was evident. The mastoid air cells were patent bilaterally. No aggressive osseous lesion was evident, and there was no abnormality of the orbits and/or nasopharynx (Figure 5).

Figure 5: Preoperative 2D- MSCT slices of the nose and sinuses.

Results

Technical and Implementation Details

As part of research activities at Bitmedix (www.bitmedix.com) in the field of natural gesture interfaces, a plug-in application for the OsiriX platform was developed. The provided OsiriX framework is used to interface with and control the built-in volume rendering and VE viewers. The OsiriX platform is written in Objective-C for macOS. The plug-in application is developed in Swift in order to achieve a more robust design; additional bridging headers were required to make the functions of one programming language available to the other. For hand gesture recognition and position tracking, the Leap SDK was used. The OsiriX application interface does not provide unlimited access to the internal functionalities of the platform; to create a natural gesture interface, full control of the camera in the 3D space of the VRen and VE views is needed.
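As an illustration of this structure, here is a minimal sketch of the plug-in's entry point under the standard OsiriX plug-in pattern (an Objective-C PluginFilter subclass activated from the plug-in menu, made visible to Swift through a bridging header). Since neither the OsiriX headers nor the Leap SDK are reproduced here, LeapDevice, PalmSample and the class below are hypothetical stand-ins, not the authors' actual code.

```swift
import Foundation

// Stand-in for the Leap SDK controller class, not the real SDK type.
final class LeapDevice {
    var onFrame: ((PalmSample) -> Void)?  // callback invoked per frame
    func connect() { /* open the sensor and start streaming frames */ }
}

struct PalmSample { var x, y, z: Double }

// In the real plug-in this class subclasses OsiriX's PluginFilter;
// that dependency is stubbed out here.
final class GesturePluginSketch {
    private let device = LeapDevice()

    /// Called once when the plug-in is activated from the OsiriX menu.
    func activate() {
        device.onFrame = { sample in
            // Forward each tracking frame to the handler for the active
            // view (volume rendering or virtual endoscopy).
            print("palm at (\(sample.x), \(sample.y), \(sample.z))")
        }
        device.connect()
    }
}
```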

The functions for camera control are protected by their class, so a static service class must be created to access them. At the start of the application, an LM controller instance is created. This instance registers callback functions for events received from the device. One of the important events is receiving a frame, where the frame handler function processes the information received from the device. The frame handler for the volume rendering viewer is slightly different from the handler for the VE. Both handlers are triggered only when the right hand, with a closed fist, enters the sensor area. Also, all sensor inputs are buffered to avoid over-sensitivity. In the VRen frame handler, the azimuth of the camera is controlled with the x component of the vector received from the device, the elevation with the y component, and the zoom with the z component. This method gives the user the ability to experience orientation in 3D space as orbiting around the center of the focused object. The level of rendering detail can be lowered while the frame handler is triggered, to obtain a more fluent interface.
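The gating and buffering just described might look like the following sketch: a handler fires only while the right hand is present with a closed fist, and raw positions are averaged over a short window to avoid over-sensitivity. HandFrame and the window size are assumptions, not the plug-in's actual types or tuning.

```swift
import Foundation

// Hypothetical stand-in for the Leap SDK frame type.
struct HandFrame {
    var isRightHand: Bool
    var isFist: Bool        // true when all fingers are folded
    var x, y, z: Double     // palm position in sensor coordinates
}

final class BufferedGestureGate {
    private var window: [(x: Double, y: Double, z: Double)] = []
    private let windowSize = 8  // assumed smoothing window, tune as needed

    /// Returns a smoothed palm position, or nil while the gate is closed
    /// (wrong hand or open fist), in which case the camera stays put.
    func process(_ frame: HandFrame) -> (x: Double, y: Double, z: Double)? {
        guard frame.isRightHand && frame.isFist else {
            window.removeAll()  // reset when the gesture is released
            return nil
        }
        window.append((frame.x, frame.y, frame.z))
        if window.count > windowSize { window.removeFirst() }
        let n = Double(window.count)
        return (window.map { $0.x }.reduce(0, +) / n,
                window.map { $0.y }.reduce(0, +) / n,
                window.map { $0.z }.reduce(0, +) / n)
    }
}
```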

In the VE frame handler, the x and y components are used to control the yaw and pitch of the camera. The z component is used as an acceleration value for the camera travel speed. The current direction and position of the camera can be obtained from the OsiriX application interface. The camera position and its focal point are then translated in the direction of the camera orientation, giving the user a fly-through feature in VR. The plug-in is built on macOS and can be installed on the OsiriX platform. The plug-in application runs in the background of the OsiriX environment and waits for the device connection event. If the device is recognized on the host machine, the LM controller initializes and waits for the VRen or VE view to be opened. After the supported view is opened, the sensor is ready to use. This interface can be used alongside the classical mouse interface without any interference [18].
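A minimal sketch of that fly-through step follows, assuming a simplified camera model in place of the OsiriX application interface: the palm's Z displacement acts as acceleration on the travel speed, and the camera position and focal point are both translated along the current viewing direction. The gain and damping constants are assumptions.

```swift
import Foundation

// Plain vector type standing in for the viewer's camera values.
struct Vec3 {
    var x, y, z: Double
    static func + (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x + b.x, y: a.y + b.y, z: a.z + b.z) }
    static func * (v: Vec3, s: Double) -> Vec3 { Vec3(x: v.x * s, y: v.y * s, z: v.z * s) }
}

struct VECamera {
    var position: Vec3
    var focalPoint: Vec3
    var direction: Vec3   // unit viewing direction, read from the viewer
    var speed: Double = 0
}

/// Advance the fly-through by one frame; zDelta is the palm's Z
/// displacement from the gesture's start position (the acceleration).
func advance(_ camera: inout VECamera, zDelta: Double,
             accelGain: Double = 0.001, damping: Double = 0.95) {
    camera.speed = camera.speed * damping + zDelta * accelGain
    let step = camera.direction * camera.speed
    camera.position = camera.position + step
    camera.focalPoint = camera.focalPoint + step  // keep looking ahead
}
```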

Discussion

The first evaluation of our system allowed us to observe users operating 3DVR and VE and adopting the principles of our proposed contactless interface. We have come to some conclusions that will help us in our future work. We found it possible to simplify significantly the movement gestures in the virtual space of VE as opposed to the initial design. Using the absolute value of the hand position in the sensor coordinate system as the deflection of camera angles requires additional space and involves a considerable amount of hand movement. We propose using the angles of palm position as values that represent the speed of camera deflection. This approach would ensure more precision while navigating through the VE space, while also reducing working complexity.
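The proposed refinement could be sketched as follows: palm tilt angles are integrated as an angular velocity instead of being mapped to absolute camera angles, so a small, stationary wrist rotation steers the camera continuously. The angle source, gain and dead zone below are assumptions, not measured values from our evaluation.

```swift
import Foundation

struct CameraAngles { var yaw: Double; var pitch: Double }

/// Integrate palm tilt (radians) into camera angles over one frame of
/// duration dt; a dead zone keeps a near-level palm from drifting the view.
func steer(_ angles: inout CameraAngles,
           palmPitch: Double, palmRoll: Double, dt: Double,
           rate: Double = 0.8, deadZone: Double = 0.05) {
    let p = abs(palmPitch) > deadZone ? palmPitch : 0
    let r = abs(palmRoll)  > deadZone ? palmRoll  : 0
    angles.pitch += p * rate * dt
    angles.yaw   += r * rate * dt
}
```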

Observing user interaction with the application, we noticed the need for VCs that could reveal additional parameters and application functionalities through a dashboard interface, which can either overlay the current view or be shown on a separate computer screen if the environment allows it. The 3DVR and VE views could then be used uninterrupted and unburdened with unnecessary data when not required, independently of which application environment and settings were used. Such a dashboard could contain the most important functions and parameters concerning the current VR or VE context (a minimal data-model sketch follows the list). Some of the functions and parameters could be:

a. Currently active context (3DVR or VE) and the possibility to switch between them,

b. A list of ROIs defined before the operation,

c. Parameters for each ROI such as region color,

d. Transparency and activation/deactivation from current context,

e. A list of VE travel paths which contains a set of belonging checkpoints,

f. Crop function activation, and

g. Other contextual OsiriX application functions and parameters.
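The sketch below groups the listed items into a minimal data model per view context. All type and property names are hypothetical; they only illustrate one way the dashboard state could be organized, not the application's actual design.

```swift
import Foundation

enum DashboardContext { case volumeRendering, virtualEndoscopy }  // item a

struct ROIEntry {                 // items b-d
    var name: String              // doubles as the ROI's voice command
    var color: String             // contrasting display color
    var transparency: Double      // 0 = opaque, 1 = invisible
    var isActive: Bool            // shown in the current context?
}

struct TravelPath {               // item e
    var name: String
    var checkpoints: [String]     // named checkpoints along the VE path
}

struct DashboardState {
    var context: DashboardContext = .volumeRendering
    var rois: [ROIEntry] = []
    var paths: [TravelPath] = []
    var cropEnabled = false       // item f

    /// Voice command "switch": toggle the active application view.
    mutating func switchContext() {
        context = (context == .volumeRendering) ? .virtualEndoscopy
                                                : .volumeRendering
    }

    /// A voice command naming an ROI toggles its visibility.
    mutating func toggleROI(named name: String) {
        if let i = rois.firstIndex(where: { $0.name == name }) {
            rois[i].isActive.toggle()
        }
    }
}
```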

When choosing an ROI or checkpoint to be activated, the user should be presented with a clear 3DVR or VE view with an unobtrusive list of ROIs or checkpoints and their associated VCs. For our team, this routine preoperative as well as intraoperative procedure in our OR, which enabled very precise and simple manipulation of virtual objects with the sense of physical presence at a virtual location, represents a more effective and safer endoscopic and VE procedure in our hands in comparison with some ‘standard’ endo-techniques. Fast and variable multiplanar reconstructions evaluating structures inside/outside the operative field can be repeated over and over without the need for further preoperative diagnostic work-up, while performing various types of operations, which is one of the crucial features of this procedure (Figure 6).

Figure 6: Preoperative planning in our LM-VE-3DVR-VC activities in the surgery of the nose, sinuses and skull base, with different 2D and 3D models of human head (different aspects of view).

The surgeon defines commands by simple hand movements ‘in the air’, which enable him (and all other medical staff in the OR) to perform virtual navigation (3D-VE) as well as VS in an imagined anatomic world that does not exist in reality (3D-VE, VS). In this way, the decision making that will follow in the real world, during the operative procedure on the patient lying on the operating table, has been previously coordinated in the virtual surrounding. Indeed, this is the major reason for the purposeful application of VR in the OR. It makes a highly potent tool available to the physician-surgeon; if used in a rational and proper way, it can substantially change the outcomes of hitherto complicated and serious operative procedures, relying on the possibility to assess their real feasibility. There is an opportunity to choose from a number of different approaches and performances for future operations and, finally, to select the safest, relatively fast and successful approach to the operative field.

In our future developmental N-OsiriX-LM-VE plans, we intend to define precisely and accurately the target points of interest in diagnostic VE/VS on the virtual model of the patient’s head, which will then serve for real orientation in space during the operation on the respective patient. This should enable the use, in the real operation, of a faithful VS ‘copy’ of the VE model (as faithful as possible). This will lead to ‘predictability’ of the future real operation, while successfully obviating all the potential errors [19] that may emerge in the real operation. We will then determine the parameters for a realistic assessment of the real operative procedure, its safety and length, as well as the elements required to substantially reduce operative trauma and improve postoperative recovery.

Conclusion

The ultimate goal of this innovative surgical procedure is to allow the presentation of virtual objects to all of the human senses in a way identical to their natural counterparts [9,20]. This technique will enable surgeons to gain complete, aware orientation in the operative field, where ‘overlapping’ of the real and virtually created anatomic models is inevitable. The new ‘spatial experience’ in the OR must be comprehensively and correctly recognized in each segment of the operation. Precise differentiation of pathologic and normal soft tissue, as well as of fine bony details, in ideal conditions [20], are the basic conditions in which our SW can be employed, integrating the LM controller with medical imaging systems only with ‘in the air’ real control by the surgeon’s hands. In this application, it is very easy to integrate real and 3D virtual objects (with high rendering speed and without reducing visual quality), making it necessary to present and manipulate them simultaneously in a single scene, with the development of hybrid systems referred to as AR systems [21], or similar [22].

In this way, three-way communication between the endo-camera, the VE-LM monitors and the surgeon’s senses will provide active virtual tracking, with detection of abnormal patterns in rhinologic endoscopy [23] (any model and/or virtual model of the surgical field is defined as it actually exists in its natural surroundings). This enables the surgeon to control the whole OR from one place, with no touchscreens or additional personnel activities, and to run the system and successfully complete the operation itself. Our human mind and understanding of this new surgery work by creating completely new models of human behavior and of understanding spatial relationships, along with devising assessments that will provide insight into our human nature. In our experience, we need to find the ‘best clinical practice’ per viam the best VR application in the rhino-OR, which will in the future be part of medical SI (decentralized, self-organized systems) in a variety of VR fields, in clinical medicine as well as in fundamental research [24].

Acknowledgment

The authors are grateful to Professor Heinz Stammberger, M.D.(†), Graz/Austria/EU, for his helpful discussion about NESS and contactless surgery (February/2018), and Dr. Armin Stranjak, Lead software architect at Siemens Healthcare, Erlangen, Germany, EU, for his discussion about MSCT and MRI nose/sinus scans.

References
