Archive for author: David Black

Auditory Display for Fluorescence-guided Brain Tumor Surgery

David Black, Horst Hahn, Ron Kikinis, Karin Wårdell, Neda Haj-Hosseini (2017). International Journal of Computer Assisted Radiology and Surgery (accepted September 2017)

Abstract:

Protoporphyrin IX (PpIX) fluorescence allows discrimination of tumor and normal brain tissue during neurosurgery. A hand-held fluorescence (HHF) probe can be used for spectroscopic measurement of 5-ALA-induced PpIX to enable objective detection compared with visual evaluation of fluorescence. However, current technology requires that the surgeon either views the measured values on a screen or employs an assistant to verbally relay the values. An auditory feedback system was developed and evaluated for communicating measured fluorescence intensity values directly to the surgeon.

The auditory display was programmed to map the values measured by the HHF probe to the playback of tones that represented three fluorescence intensity ranges and one error signal. Ten participants with no previous knowledge of the application took part in a laboratory evaluation. After a brief training period, participants performed measurements on a tray of 96 wells containing liquid fluorescence phantoms and verbally stated the perceived measurement value for each well. The latency and accuracy of the participants’ verbal responses were recorded, and long-term retention of the sound-to-intensity mapping was evaluated after 7-12 days.
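As a rough illustration of this kind of categorized mapping, the sketch below assigns a measured intensity to one of three tone categories or an error signal. The thresholds, tone frequencies, and function names are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch of a categorized auditory mapping for probe readings.
# Thresholds and tone frequencies are illustrative assumptions only; the
# actual ranges depend on the HHF probe and its calibration.

LOW, MEDIUM, HIGH, ERROR = "low", "medium", "high", "error"

# Hypothetical intensity thresholds (arbitrary units) and tone frequencies (Hz).
THRESHOLDS = [(0.0, 10.0, LOW), (10.0, 50.0, MEDIUM), (50.0, float("inf"), HIGH)]
TONES_HZ = {LOW: 262.0, MEDIUM: 523.0, HIGH: 1047.0, ERROR: 110.0}

def classify_intensity(value):
    """Map a measured fluorescence intensity to a tone category.

    Missing or negative readings are treated as measurement errors.
    """
    if value is None or value < 0:
        return ERROR
    for lo, hi, label in THRESHOLDS:
        if lo <= value < hi:
            return label
    return ERROR

def tone_for_measurement(value):
    """Return (category, tone frequency in Hz) for a probe reading."""
    category = classify_intensity(value)
    return category, TONES_HZ[category]

if __name__ == "__main__":
    for reading in [3.2, 27.0, 180.0, -1.0]:
        print(reading, "->", tone_for_measurement(reading))
```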

The participants identified the played tone accurately for 98% of measurements after training. The median response time to verbally identify the played tones was two pulses. No correlation was found between the latency and accuracy of the responses, and no significant correlation was observed between the participants’ musical proficiency and their responses.

The employed auditory display was shown to be intuitive, easy to learn and remember, fast to recognize, and accurate in providing users with measurements of fluorescence intensity or error signal. The results of this work establish a basis for implementing and further evaluating auditory displays in clinical scenarios involving fluorescence guidance and other areas for which categorized auditory display could be useful.

Mixed Reality Navigation for Laparoscopic Surgery

Brian Xavier, Franklin King, Ahmed Hosny, David Black, Steve Pieper, Jagadeesan Jayender

The role of mixed reality, which combines augmented and virtual reality, in healthcare and specifically in modern surgical interventions has yet to be established. In laparoscopic surgeries, precision navigation with real-time feedback of distances from sensitive structures such as the pulmonary vessels is critical to preventing complications. Combining video assistance with newer navigational technologies to improve outcomes in a simple, cost-effective approach remains a constant challenge.

This study aimed to design and validate a novel mixed reality intra-operative surgical navigation environment using a standard model of laparoscopic surgery. We modified an Oculus Rift with two front-facing cameras to receive images and data from 3D Slicer and conducted trials with a standardized Ethicon TASKit surgical skills trainer.

Participants were enrolled and stratified by surgical experience, including residents, fellows, and attending surgeons. Using the TASKit box trainer, participants were asked to transfer pegs, identify radiolabeled pegs, and precisely navigate through wire structures. Tasks were repeated and incrementally aided with modalities such as 3D volumetric navigation, audio feedback, and mixed reality. A final randomized task compared the current standard of laparoscopy with CT guidance against the proposed mixed reality approach incorporating all additional modalities. Metrics such as success rate, task time, error rate, and user kinematics were recorded to assess learning and efficiency.

Conclusions: A mixed reality surgical environment incorporating real-time video assistance, navigational and radiologic data, and audio feedback has been created to better enable laparoscopic surgical navigation, with early validations demonstrating potential use cases.

Auditory Display for Improving Free-hand Gesture Interaction

David Black, Bastian Ganze, Julian Hettig, Christian Hansen

Abstract

Free-hand gesture recognition technologies allow touchless interaction with a range of applications. However, touchless interaction concepts usually only provide primary, visual feedback on a screen. The lack of secondary tactile feedback, such as that of pressing a key or clicking a mouse, is one reason that free-hand gestures have not been adopted as a standard means of input. This work explores the use of auditory display to improve free-hand gesture interaction. Gestures using a Leap Motion controller were augmented with auditory icons and continuous, model-based sonification. Three concepts were generated and evaluated using a sphere-selection task and a video frame selection task. The user experience of the participants was evaluated using NASA TLX and QUESI questionnaires. Results show that the combination of auditory and visual display outperforms both purely auditory and purely visual displays in terms of subjective workload and performance measures.
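As a rough sketch of how such a combination might be structured, the example below pairs a continuous, distance-driven sonification parameter with a discrete auditory icon triggered on selection in a sphere-selection task. All coordinates, ranges, and names are illustrative assumptions rather than the concepts evaluated in the paper.

```python
import math

# Hypothetical sphere-selection sonification: the distance between the tracked
# fingertip and a target sphere drives a continuous sound parameter, and a short
# auditory icon is triggered on selection.

SPHERE_CENTER = (0.0, 200.0, 0.0)  # mm, in Leap-style coordinates (assumed)
SPHERE_RADIUS = 30.0               # mm

def distance_to_sphere(fingertip):
    """Euclidean distance from the fingertip to the sphere surface (0 when inside)."""
    dx, dy, dz = (f - c for f, c in zip(fingertip, SPHERE_CENTER))
    return max(0.0, math.sqrt(dx * dx + dy * dy + dz * dz) - SPHERE_RADIUS)

def continuous_params(fingertip, max_dist=200.0):
    """Continuous sonification: the sound gets brighter as the hand nears the target."""
    d = min(distance_to_sphere(fingertip), max_dist) / max_dist  # 0 at target .. 1 far away
    return {"cutoff_hz": 4000.0 - 3500.0 * d}

def on_frame(fingertip, pinch_strength):
    """Per-frame update: continuous parameters plus an auditory icon on pinch-select."""
    params = continuous_params(fingertip)
    if pinch_strength > 0.9 and distance_to_sphere(fingertip) == 0.0:
        params["auditory_icon"] = "select.wav"  # placeholder icon name
    return params

if __name__ == "__main__":
    print(on_frame((5.0, 195.0, -3.0), pinch_strength=0.95))
```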

Auditory Display for Ultrasound Scan Completion

Clinicians manually acquire sequences of 2D ultrasound images to evaluate the local situs in real time. 3D volumes reconstructed from these sequences give clinicians a spatial overview of the area. Although 3D renderings are beneficial, drawbacks prohibit efficient interaction during acquisition. Current 2D image acquisition methods provide only one audible beep after each 2D scan added to the 3D volume, leaving the clinician without feedback about scan quality. This can produce highly inhomogeneous intensities of the anatomical structure along with imaging artifacts, resulting in overexposed images and reduced image quality. Low-quality volumes must be reacquired, causing clinician frustration and wasted operation time. Auditory display maps information to parameters of sound synthesizers so that a user can “hear” underlying data. It has been investigated to guide instruments or to warn clinicians approaching risk structures, helping clinicians focus on the situs while still receiving information.

We harness auditory display for acquiring complete, high-quality scans. Our auditory display employs a granular synthesizer with 9 simultaneous sawtooth oscillators. An array of 100 cells represents an ultrasound volume, where each cell represents one scan, with values ranging from 0 to 100 indicating the completeness of each individual scan. The synthesizer maps the completeness of the current and 8 neighboring cells to the pitch, pitch variation, noisiness, low-pass filter rolloff frequency, and stereo width of 9 grains. The synthesizer mimics a vacuum cleaner sucking up dust: incomplete areas are heard as scattered, noisier, and higher pitched, whereas complete areas sound stable, less noisy, and lower pitched. Pilot studies show the auditory display allows high-quality, efficient individual and overall scan completion entirely without a monitor. Thus, using auditory display to augment US acquisition could ensure higher-quality scans and improve reconstruction while reducing the use of monitors during the procedure and helping clinicians keep their view on the situs.
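The sketch below illustrates the described completeness-to-grain mapping in a minimal form: each of the 9 grains is driven by the completeness of the current cell or one of its 8 neighbors. The concrete parameter ranges (and the direction of the filter mapping) are assumptions for illustration; the actual synthesizer uses its own calibration.

```python
import numpy as np

# Sketch of the completeness-to-grain mapping described above: the current cell
# and its 8 neighbors each drive one grain of the granular synthesizer.
# Completeness values run from 0 (empty) to 100 (complete).

GRID = np.zeros((10, 10))  # 100 cells, one per scan position in the volume

def grain_params(completeness):
    """Map one cell's completeness (0-100) to grain synthesis parameters.

    Incomplete cells sound higher, noisier, and more scattered; complete cells
    sound lower, cleaner, and more stable (the vacuum-cleaner metaphor).
    """
    c = float(np.clip(completeness, 0, 100)) / 100.0
    return {
        "pitch_hz": 110.0 + (1.0 - c) * 330.0,     # lower pitch when complete
        "pitch_jitter": (1.0 - c) * 0.2,           # relative random pitch variation
        "noisiness": 1.0 - c,                      # noise vs. sawtooth mix
        "lowpass_hz": 800.0 + (1.0 - c) * 4000.0,  # harsher spectrum when incomplete (assumed direction)
        "stereo_width": 1.0 - c,                   # scattered across the stereo field when incomplete
    }

def grains_for_cell(row, col):
    """Return parameters for the grains driven by the current cell and its neighbors."""
    grains = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, k = row + dr, col + dc
            if 0 <= r < GRID.shape[0] and 0 <= k < GRID.shape[1]:
                grains.append(grain_params(GRID[r, k]))
    return grains

if __name__ == "__main__":
    GRID[4:6, 4:6] = 90  # mark a mostly complete patch of scans
    print(len(grains_for_cell(5, 5)), "grains for the current probe position")
```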

A Survey of Auditory Display in Image-Guided Interventions

David Black, Christian Hansen, Arya Nabavi, Ron Kikinis, Horst Hahn. In International Journal of Computer Assisted Radiology and Surgery (accepted February 2017)

This article investigates the current state of the art of the use of auditory display in image-guided medical interventions. Auditory display is a means of conveying information using sound, and we review the use of this approach to support navigated interventions. We discuss the benefits and drawbacks of published systems and outline directions for future investigation.

We undertook a review of scientific articles on the topic of auditory rendering in image-guided intervention. This includes methods for avoidance of risk structures and instrument placement and manipulation. The review did not include auditory display for status monitoring, for instance in anesthesia.

We identified 14 publications in the course of the search. Most of the literature (62%) investigates the use of auditory display to convey the distance of a tracked instrument to an object using proximity or safety margins. The remainder discuss continuous guidance for navigated instrument placement. Four of the articles present clinical evaluations, nine present laboratory evaluations, and three present informal evaluations (three present both laboratory and clinical evaluations).

In summary, auditory display is a growing field that has been largely neglected in research in image-guided intervention. Despite benefits of auditory displays reported in both the reviewed literature and non-medical fields, adoption in medicine has been slow. Future challenges include increasing interdisciplinary cooperation with auditory display investigators to develop more meaningful auditory display designs and comprehensive evaluations which target the benefits and drawbacks of auditory display in image guidance.

Comparison of Auditory Display Methods for Elevation Change in Three-Dimensional Tracked Surgical Tool

David Black, Rocío Lopez-Velasco, Horst Hahn, Javier Pascau, Ron Kikinis. Computer Assisted Radiology and Surgery, June 2017

In image-guided interventions, screens display information to help the clinician complete a task, such as placing an instrument or avoiding certain structures. Often, clinicians wish to access this information without having to switch views between the operating situs and the navigation screen. To reduce view switches and help clinicians concentrate on the situs, so-called auditory display has been gaining attention as a means of delivering information to clinicians in image-guided interventions. Auditory display has been implemented in image-guided interventions to relay position information from navigation systems to locate target paths in liver resection marking, resect volumes with neuronavigation, and avoid risk structures in cochleostomy. Previous attempts provide primarily simple, non-directional warning signals and still require the use of a navigation screen. Clinical participants in previous attempts requested auditory display that provides directional cues. Our described method allows screen-free navigation of a tracked instrument with auditory display. However, because mapping changes in y-axis instrument movement onto beneficial auditory display parameters has proven difficult in previous work, this paper compares two methods of mapping elevation changes onto an auditory position parameter – one using pitch comparison between alternating tones, and another using slightly falling and rising frequencies (glissando) for each tone. In this work, we present a pilot study that uses time-to-target as a performance factor to compare these two methods.

The two auditory display methods are used to relay the position of a tracked instrument using sound. The methods described here relay elevation (changes in the y-axis), azimuth (changes in the x-axis) and distance along the perpendicular trajectory path (z-axis) from the tracked instrument towards a target. Because the methods are suited for applications involving 2D placement plus a 1D depth component, these generalized auditory displays allow tracking during a variety of clinical applications using tracked instruments, including resection path marking, ablation and biopsy needle placement, bone drilling, and endoscopic instrument placement.

Two methods were developed for comparison to relay the position of a tracked instrument using auditory display. Both employ the same mapping for changes in azimuth (x-axis) and depth (z-axis). For changes in elevation (y-axis), the first method employs pitch comparison. A tone with moving pitch between 261 Hz and 1046 Hz is alternated with a reference tone with a static pitch of 523 Hz. This alternation allows the user to compare the two pitches and bring one towards the other, similar to tuning a guitar or violin string. When the pitch of the moving tone reaches that of the reference tone, the correct elevation is reached. For the glissando (lit. “sliding”) method, only one moving tone is used. When the elevation is positive, the pitch of the tone “slides” down slightly, signaling that the instrument should be lowered. When elevation is negative, the tone “slides” up slightly, signaling that the instrument should be raised. A similar pitch range is used, and the pitch of each tone slides by ±3 semitones (ca. ±19%).

For both methods, changes in azimuth are mapped to stereo panning. The tones described above are played in stereo to indicate whether the target is left or right of the current position. A “sound object” metaphor is employed: for example, when the instrument is to the right of the target, tones are heard in the left ear, indicating that the target is to the left of the listener. Changes in the perpendicular distance to the target (z-axis) are mapped linearly to the inter-onset interval (duty cycle) of the tones, similar to an electronic car parking aid. At maximum distance, tones are played 900 ms apart; at the target, this interval is reduced to 200 ms.
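The sketch below expresses these mappings in code, using the values given in the text (a 523 Hz reference, a 261-1046 Hz pitch range, a glissando of up to 3 semitones, and a 900 ms to 200 ms inter-onset interval). The workspace extents used for normalization are illustrative assumptions.

```python
# Sketch of the glissando, stereo panning, and inter-onset interval mappings
# described above. Error ranges used for normalization are assumed values.

Y_RANGE = 100.0  # assumed elevation error (mm) that maps to the full glissando
X_RANGE = 100.0  # assumed azimuth error (mm) that maps to full left/right panning
Z_MAX = 150.0    # assumed maximum perpendicular distance (mm) to the target

def glissando_tone(y_error):
    """Elevation: a tone that slides down when the tool is too high and up when too low."""
    base_hz = 523.0
    semitones = -3.0 * max(-1.0, min(1.0, y_error / Y_RANGE))
    return {"start_hz": base_hz, "end_hz": base_hz * 2.0 ** (semitones / 12.0)}

def stereo_pan(x_error):
    """Azimuth: 'sound object' metaphor, the tone sounds on the side where the target lies."""
    return -max(-1.0, min(1.0, x_error / X_RANGE))  # -1 = fully left, +1 = fully right

def inter_onset_interval(z_dist):
    """Distance: tones repeat faster as the tool approaches the target (900 ms -> 200 ms)."""
    d = max(0.0, min(1.0, z_dist / Z_MAX))
    return 200.0 + d * 700.0  # milliseconds between tone onsets

if __name__ == "__main__":
    print(glissando_tone(40.0), stereo_pan(-25.0), inter_onset_interval(75.0))
```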

A pilot study was performed with 10 non-expert participants to gauge the usability of each of the methods for elevation mapping. After a short training period with eyes open using a screen to become familiarized with the system, each participant completed two placements of the tracked instrument with eyes closed, i.e., blind placement without screen, for each of the two methods.

For the pitch comparison method, time-to-target averaged 57.1 seconds across all participants. For the glissando method, the time-to-target averaged 23.6 seconds. On a subjective difficulty scale of “low,” “medium,” “high,” and “very high,” half of the participants rated the difficulty of the pitch-comparison method as “low” and half as either “medium,” “high,” or “very high” difficulty. In contrast, all participants rated the glissando method as “low” difficulty. Participants commented on the high mental demand of hearing the reference tone and the task of comparing the two alternating tones.

Although the use of auditory display for image-guided navigation tasks has increased in recent years, previous attempts have primarily provided only basic warning signals, prompting clinicians to request directional cues in auditory display. Whereas changes in azimuth can be mapped intuitively to stereo panning and distance to target can be mapped to the inter-onset interval thanks to participants’ familiarity with car parking aids, changes in elevation have proved troublesome to design for. This pilot study compares two methods for mapping elevation – alternating pitch between two tones (an “instrument tuning” metaphor) and sliding pitches for single tones (“glissando”). Results of the study show that the glissando method is promising, leading participants to reach the target faster and resulting in lower subjective difficulty ratings. Further studies should incorporate additional, refined auditory display methods and evaluate use in real clinical scenarios.

Instrument-Mounted Displays for Reducing Cognitive Load During Surgical Navigation

Surgical navigation systems rely on a monitor placed in the operating room to relay information. Optimal monitor placement can be challenging in crowded rooms, and it is often not possible to place the monitor directly beside the situs. The operator must split attention between the navigation system and the situs. We present an approach for needle-based interventions to provide navigational feedback directly on the instrument and close to the situs by mounting a small display onto the needle.

By mounting a small, lightweight smartwatch display directly onto the instrument, we are able to provide navigational guidance close to the situs and directly in the operator’s field of view, thereby reducing the need to switch the focus of view between the situs and the navigation system. We devise a specific variant of the established cross-hair metaphor suitable for the very limited screen space. We conduct an empirical user study comparing our approach to using a monitor and to a combination of both.
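As a minimal sketch of the underlying idea, the example below projects the needle’s lateral offset from the planned path onto the pixel coordinates of a small display so that the marker sits in the center when the needle is on target. The screen size, scaling, and names are assumptions and not the exact cross-hair variant devised in the paper.

```python
# Sketch of a cross-hair style mapping for a small instrument-mounted display:
# the lateral offset of the needle tip from the planned path is projected onto
# screen coordinates, so the operator steers until the marker reaches the center.
# Screen resolution and scale are illustrative assumptions.

SCREEN_PX = 320          # assumed square smartwatch resolution in pixels
MM_PER_HALF_SCREEN = 20  # assumed: a 20 mm lateral error reaches the screen edge

def crosshair_position(offset_x_mm, offset_y_mm):
    """Map the needle's lateral offset (mm) to marker pixel coordinates."""
    def to_px(offset_mm):
        norm = max(-1.0, min(1.0, offset_mm / MM_PER_HALF_SCREEN))
        return int(round(SCREEN_PX / 2 + norm * SCREEN_PX / 2))
    return to_px(offset_x_mm), to_px(offset_y_mm)

if __name__ == "__main__":
    # 5 mm offset in x and -10 mm in y -> marker right of and above screen center.
    print(crosshair_position(5.0, -10.0))
```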

Results from the empirical user study show significant benefits for cognitive load, user preference, and general usability for the instrument-mounted display, while achieving the same level of performance in terms of time and accuracy compared to using a monitor.

We successfully demonstrate the feasibility of our approach and its potential benefits. With ongoing technological advancements, instrument-mounted displays might complement standard monitor setups for surgical navigation in order to lower cognitive demands and improve the usability of such systems.

Auditory Feedback to Support Image-Guided Medical Needle Placement

During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician’s visual reliance on the screen, this work proposes an auditory feedback method, as a stand-alone method or as a supplement to visual feedback, for placing a navigated medical instrument, in this case a needle.

An auditory synthesis model using pitch comparison and stereo panning parameter mapping was developed to augment or replace visual feedback for navigated needle placement. In contrast to existing approaches, which augment but still require a visual display, this method allows view-free needle placement.
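A minimal sketch of such a mapping is shown below: a moving tone alternates with a fixed reference tone and matches its pitch when the needle reaches the planned elevation, while stereo panning indicates lateral error. The frequencies and error ranges are illustrative assumptions, not the parameters of the evaluated model.

```python
# Sketch of a pitch-comparison mapping for needle elevation, paired with stereo
# panning for lateral error. Frequencies and ranges are assumed for illustration.

REF_HZ = 523.0
MIN_HZ, MAX_HZ = 261.0, 1046.0
Y_RANGE_MM = 50.0  # assumed elevation error that maps to the full pitch range

def moving_tone_hz(y_error_mm):
    """Pitch of the moving tone; it equals the reference pitch when y_error is zero."""
    norm = max(-1.0, min(1.0, y_error_mm / Y_RANGE_MM))
    # Interpolate on a logarithmic (musical) scale around the reference pitch.
    if norm >= 0:
        return REF_HZ * (MAX_HZ / REF_HZ) ** norm
    return REF_HZ * (MIN_HZ / REF_HZ) ** (-norm)

def pan(x_error_mm, x_range_mm=50.0):
    """Stereo position: tones sound on the side where the target lies (-1 left, +1 right)."""
    return -max(-1.0, min(1.0, x_error_mm / x_range_mm))

if __name__ == "__main__":
    print(moving_tone_hz(25.0), moving_tone_hz(0.0), pan(-10.0))
```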

Audiovisual feedback shows promising results and establishes a basis for applying auditory feedback as a supplement to visual information in other navigated interventions, especially those for which viewing the patient is beneficial or necessary.

 

17.06.2016 – DICOM for Medical Image Computing Research, Invited Talk from Andrey Fedorov, Surgical Planning Laboratory, Harvard

This talk will take place on 17 June 2016 at 15:00 in the Cartesium building (Rotunde) at the University of Bremen.  

Prof. Ron Kikinis would like to cordially invite you to a talk by Prof. Andrey Fedorov.

Medical image computing holds tremendous promise for precision medicine clinical applications. Image post-processing tools for automated image quantitation have a critical role in applications such as precision image guidance and disease characterization. Their deployment in the context of clinical research necessitates interoperability with clinical systems. Comparison with established outcomes and evaluation tasks motivates integration of clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and their semantics. In this talk I will discuss our work in applying the Digital Imaging and Communications in Medicine (DICOM) international standard to support these tasks. I will present our approach, ongoing work on the implementation of supporting tools, and emerging applications that are motivating our development in cancer treatment response assessment and image-guided therapy.

23.02.2016 – Human-Computer Interaction in the Operating Room: Solutions and Future Challenges, Invited Talk from Christian Hansen, University of Magdeburg

This talk will take place on 23 February at 16:00 in the Cartesium building (Rotunde) at the University of Bremen.  Please also see this link at the Faculty of Computer Science.

Operating medical software in the operating room is often a major challenge for surgeons and medical staff. Key information, such as a patient's preoperative image and planning data, is available during an intervention but is often not presented in a form suitable for use in the operating room. This talk discusses current solution concepts and future challenges for human-computer interaction in surgery and interventional radiology, and presents new interaction concepts that are being researched and clinically evaluated in the "Computerassistierte Chirurgie" (Computer-Assisted Surgery) working group in Magdeburg.

Biography:

Jun.-Prof. Dr. Christian Hansen (born 1980)

2000 – 2006: Studied computational visualistics with medicine as an application subject at the Otto-von-Guericke-Universität Magdeburg

2006 – 2013: Research associate at the Fraunhofer Institute for Medical Image Computing (MEVIS) in Bremen

2012: Doctorate in medical informatics from Jacobs University Bremen

Since 2013: Junior professor for computer-assisted surgery at the Otto-von-Guericke-Universität Magdeburg

Since 2015: Head of the research group "Therapieplanung und Navigation" (Therapy Planning and Navigation) at the Forschungscampus STIMULATE in Magdeburg