

COGNITIVE SYSTEMS – Sonification and Spatial Cognition for Surgical Learning and Assistance.

Principal Investigators: Christian Freksa & Holger Schultheis
Work Packages: 1.1, 1.2, 2.2, 3.2

Project 1. Spatial Cognition in Surgical Practice: Exploring the influence, and development, of spatial cognitive processes in laparoscopic skill learning.

Tina Vajsbaher and Holger Schultheis 


Tina Vajsbaher, completing a task on the laparoscopic box simulator, which is used in the training of surgeons in laparoscopic procedures.

Description: Laparoscopy refers to a minimally invasive surgical technique in the abdominal region: small incisions are made, through which a viewing device (a laparoscope) and other surgical instruments (e.g., graspers) are inserted, and the camera image is projected onto a 2D monitor display. The procedure confronts the surgeon with a series of visual, spatial, and psychomotor challenges, such as reduced binocular depth cues, rotated 2D views (on the monitor) of a 3D environment (inside the body), and the non-intuitive movement caused by the ‘Fulcrum effect’. These challenges make laparoscopy notoriously difficult to acquire, learn, and master, and they have been associated with a long and steep learning curve compared to conventional open surgery. While the importance of spatial cognitive abilities for laparoscopic skill learning is well established in the literature, many crucial questions remain. For one, the findings of existing studies are partly contradictory: some studies concluded that spatial cognition as a whole predicts laparoscopic learning, others indicated that only specific underlying spatial processes are important, and still others found spatial processes to influence only the early phase of novice training. Further studies indicated that an average novice surgeon in residency training (Chirurgische Weiterbildung) will not reach skill competency by the end of the training period, and that the training outcome may be moderated by initial spatial ability: only novices with strong spatial abilities may reach skill competency, while novices with low spatial abilities may never reach it, even with increased practice. Most importantly, all these findings have been obtained outside the clinical context (using simulators) and, more often than not, by studying people who are not surgeons.
As a result, it is currently unclear to what extent (and which of) the partly disparate findings apply to surgical practice and training in real operating rooms.

Aim: The aim of this project is to provide a mechanistic understanding of, and empirical evidence for, the influence and development of spatial cognitive abilities in relation to technical skill acquisition, over a two-year period, in novice surgeons currently enrolled in surgical residency training (Chirurgische Weiterbildung) at Pius Hospital Oldenburg. This will be achieved in three phases. Phase I will use an online questionnaire to capture the opinions and experiences of laparoscopic surgeons of all seniority levels, and from clinics across Germany. The questionnaire will be distributed in partnership with the Berufsverband der Deutschen Chirurgen (BDC) and the Deutsche Gesellschaft für Chirurgie (DGCH) (WP 1.1). Phase II will identify and measure spatial cognitive abilities in senior surgeons (Chefarzt and Oberarzt from Pius Hospital Oldenburg and Klinikum Bremen-Mitte) in order to devise an expertise and proficiency classification criterion as well as a learning-objective criterion for novices (WP 3.2). Finally, Phase III will identify, measure, and track the novice surgeons’ spatial abilities, alongside their intra-operative laparoscopic performance (technical skill acquisition), over a 24-month period.

Clinical collaborators of the project:


Project 2. Psychoacoustic auditory display for surgical navigation

Tim Ziemer and Holger Schultheis 

Description: For complicated surgeries, pre-operative imaging and planning are common practice. Based on CT or MRI scans, three-dimensional CAD models of the patient’s anatomy are created. Here, different structures are segmented and often augmented by means of coloration or the option to make certain structures transparent or invisible. Target points or trajectories for tools are planned and visualized within this environment. Often, surgical tools need to be navigated with a precision of millimeters, e.g., to resect or ablate tumors completely while avoiding critical structures. To achieve this in cases with limited visual landmarks, especially in minimally invasive surgeries, real-time tracking of surgical tools is applied. The surgical tool is then visualized relative to the patient’s anatomy in the CAD model. Such a purely visual approach has several limitations:

  • Computational costs are relatively high, causing noticeable latencies
  • Monitor placement is restricted to the surgeon’s line of sight and field of view
  • Visual attention has to switch between patient and monitor
  • Lateral or elevated monitors are unergonomic: they can cause tension in the neck
  • Cognitive demands are high: surgeons have to scale, rotate and translate the visualization mentally from the monitor to ego perspective and extract three-dimensional spatial constellations from a two-dimensional monitor
  • Surgeries are already visually demanding: adding visualizations carries the risk of visual overload
  • Head-mounted displays avoid the monitor placement difficulty but carry the risk of simulator sickness

Aim: The aim of the psychoacoustic auditory display is to communicate navigation information via sound to overcome limitations of pure visualizations. Potential interventions that could benefit from the psychoacoustic auditory display include:

  • Needle placement for ablation and biopsy
  • Bone drilling for craniotomy
  • Marking of and cutting along trajectories for tumor resection
  • Placing of electrodes in cochlear implants
  • Surgical training

The auditory display can complement or, in the long run, even replace visualization for surgical navigation purposes. To communicate multidimensional spatial information, orthogonal input data must be mapped to aspects of sound that are also orthogonal in auditory perception. This is a challenging task, because physical audio parameters tend to affect many perceptual auditory qualities at once. An elaborate technical implementation of auditory perception – i.e., of principles of psychoacoustics and auditory scene analysis – may master this challenge and provide the surgeon with unambiguous multi-dimensional information. In this project, which contributes to WP 3.2, we gain knowledge about spatial cognition in surgical tasks and about auditory perception in interactive scenarios. Potential benefits of auditory over visual displays are:

  • Omnidirectional hearing vs. limited visual field
  • Hearing around obstacles vs. direct sight lines
  • Pre-attentive awareness in hearing vs. focal attention in vision
  • Hearing interrupts the workflow less than switching visual attention between patient and monitor
  • Reaction times to sound are shorter than to light
  • Real-time audio can reduce latency to the order of milliseconds
  • Surgeries are already visually demanding, so using the auditory channel may prevent visual overload
  • Audio bears no risk of simulator sickness in contrast to head-mounted displays
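
As an illustration of the orthogonal-mapping idea described above, the following minimal Python sketch maps two spatial deviations of a tracked tool tip onto two approximately independent perceptual attributes: pitch and amplitude-modulation rate. The axis assignments, the 440 Hz anchor frequency, and the 2–20 Hz modulation range are illustrative assumptions for this sketch, not the project’s actual psychoacoustic design.

```python
import numpy as np

def map_deviation_to_sound(dx, dy, fs=44100, dur=0.2):
    """Map two orthogonal spatial deviations (each in -1..1) to two
    approximately independent auditory attributes:
      dy (vertical)  -> pitch: log-frequency around a 440 Hz anchor
      dx (lateral)   -> amplitude-modulation rate (perceived beating)
    Returns the carrier frequency, the AM rate, and the signal samples.
    """
    f0 = 440.0 * 2.0 ** dy           # one octave up/down across the range
    am_rate = 2.0 + 18.0 * abs(dx)   # 2 Hz (on axis) up to 20 Hz (far off)
    t = np.arange(int(fs * dur)) / fs
    carrier = np.sin(2 * np.pi * f0 * t)
    modulator = 0.5 * (1.0 + np.sin(2 * np.pi * am_rate * t))
    return f0, am_rate, carrier * modulator
```

On target (dx = dy = 0) the surgeon would hear a steady 440 Hz tone with slow 2 Hz modulation; drifting upward raises the pitch while drifting sideways speeds up the beating, so each error dimension remains separately audible.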


Principal Investigator: Gabriel Zachmann

Project 1. Autonomous Surgical Lamps.

Jörn Teuber and Gabriel Zachmann

Description: We are developing algorithms for the autonomous positioning of surgical lamps in open surgery. These algorithms work solely on the input of a single depth camera positioned above the patient during surgery. They identify the operation site (the situs) and all potential occluders, and then move the lamps to avoid occlusions and collisions while keeping lamp movement over time to a minimum. The basic idea is to take the point cloud delivered by the depth camera and render it from the perspective of the situs towards the working space of the lamps above the operating table. From this rendering we directly obtain the information which parts of the lamps’ workspace are occluded and which are not. To minimize movement over time, we additionally use information about past occlusions and movements to position the lamps in areas that are unlikely to be occluded in the future. The algorithms are arranged in a pipeline that takes the depth image as input, analyzes it to find the situs, and finally outputs the current optimal positions for a given set of lamps.
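
The core occlusion test can be sketched in a simplified form. Instead of a full rendering pass, this Python sketch projects the point cloud onto an azimuth–elevation grid as seen from the situs and then picks the nearest unoccluded lamp direction; the grid resolution, the room coordinate frame, and the Manhattan-distance movement cost are assumptions made for illustration, not the project’s actual implementation.

```python
import numpy as np

def occluded_directions(points, situs, n_az=36, n_el=9, max_el=np.radians(80)):
    """Project a depth-camera point cloud onto an angular grid as seen
    from the situs; a grid cell counts as occluded if any point falls
    into it. points: (N, 3) array in room coordinates, situs: (3,)."""
    v = points - situs
    v = v[v[:, 2] > 0]                        # hemisphere above the table only
    az = np.arctan2(v[:, 1], v[:, 0])         # azimuth in [-pi, pi]
    el = np.arctan2(v[:, 2], np.hypot(v[:, 0], v[:, 1]))  # elevation
    grid = np.zeros((n_az, n_el), dtype=bool)
    ia = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az
    ie = np.clip((el / max_el * n_el).astype(int), 0, n_el - 1)
    grid[ia, ie] = True
    return grid                               # True = occluded direction

def best_lamp_direction(grid, current):
    """Pick the free grid cell closest to the lamp's current cell, so the
    lamp moves as little as possible (Manhattan distance on the grid)."""
    free = np.argwhere(~grid)
    if free.size == 0:
        return current                        # everything occluded: stay put
    d = np.abs(free - np.asarray(current)).sum(axis=1)
    return tuple(free[np.argmin(d)])
```

A per-frame update would then call `occluded_directions` on the latest point cloud and `best_lamp_direction` once per lamp; the history-based prediction of future occlusions described above could be layered on top by penalizing cells that were frequently occluded in past frames.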

Clinical collaborators of the project:

  • Pius Hospital Oldenburg (Department of General and Visceral Surgery) – Clinic director: Priv.-Doz. Dr. med. D. Weyhe, Contact person: Dr. rer. nat. V. Uslar
  • Klinikum Bremen-Mitte (Department of General and Visceral Surgery) – Clinic director: Prof. Dr. med. H. Bektas, Contact person: D. Blaurock
  • Asklepios Klinik Barmbek (Department of General and Visceral Surgery) – Clinic director: Prof. Dr. K. J. Oldhafer