Abstract

During surgery, medical practitioners rely on the mobile C-Arm medical x-ray system (C-Arm) and its fluoroscopic functions both to perform the surgery and to validate the outcome. Currently, technicians reposition the C-Arm through estimation and guesswork. When precise positioning and repositioning of the C-Arm are critical for surgical assessment, uncertainty in the angular position of the C-Arm components hinders surgical performance. This thesis proposes an integrated approach to automatically reposition C-Arms during critical movements in orthopedic surgery. Robot vision and control with deep learning are used to determine the angles of rotation required for a desired C-Arm repositioning. More specifically, a convolutional neural network is trained to detect and classify internal bodily structures. Image generation using the fast Fourier transform and Monte Carlo simulation improves the robustness of the network's training. Matching control points between a reference x-ray image and a test x-ray image determines the projective transformation relating the two images; from this transformation matrix, the tilt and orbital angles of rotation of the C-Arm are calculated. Key results indicate that the proposed method repositions mobile C-Arms to a desired position within 8.9% error for the tilt angle and 3.5% error for the orbital angle. The guesswork entailed in fine C-Arm repositioning is thus replaced by a systematic, repeatable method. Ultimately, confidence in C-Arm positioning and repositioning is reinforced, and surgical performance with the C-Arm is improved.
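The core geometric step described above, recovering the projective transformation (homography) that relates a reference image to a test image from matched control points, can be sketched as follows. This is not the thesis implementation; it is a minimal illustration of the standard direct linear transform (DLT), assuming at least four non-collinear point correspondences are available from the matching stage:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 projective transformation mapping src points to
    dst points via the direct linear transform (DLT).

    src, dst: (n, 2) arrays of matched control points, n >= 4, with no
    three points collinear.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the
        # nine homography entries h (stacked row-wise).
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale so H[2, 2] == 1
```

With the homography in hand, the thesis extracts the tilt and orbital angles from the matrix entries; how those angles are parameterized depends on the C-Arm's geometry and calibration, so that step is not reproduced here.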

Date of publication

Summer 8-12-2019

Document Type

Thesis

Language

English

Persistent identifier

http://hdl.handle.net/10950/1861

Committee members

Dr. Chung Hyun Goh, Dr. X. Neil Dong, Dr. Fredericka Brown

Degree

Master of Science in Mechanical Engineering
