Extending our Computer Vision

Jun 9, 2020
Michael Barrow

At the CSE Winter Research Open House earlier this year, PhD student Michael Barrow was honored for producing the best industry poster: Data Driven Tissue Models for Surgical Image Guidance. But like much research, getting to this accolade was quite a journey.

Barrow came relatively late to medical technology. In his second year, he was attending a lecture by UC San Diego professor of surgery, Sonia Ramamoorthy, MD, when a light bulb went off.

“She was discussing the future of surgery and some major challenges,” said Barrow. “I found her talk inspiring and approached her afterwards. I told her, I have background in computer vision, which I think might apply to some of these medical problems.”

Computer vision seeks to process digital images more the way the brain does, a capability that could greatly benefit surgeons. As procedures have become less invasive, seeing key biological structures has become more challenging. Surgical robots can be incredibly precise, but are often limited by their conventional video cameras.

“If you can’t see the vessels and nerves you're trying to avoid, or the tumor you're trying to remove, that precision can be wasted,” said Barrow.

To solve this problem, Barrow has been working on technologies to help surgeons navigate inside the body. Ryan Kastner, PhD, was intrigued by the project, and invited Barrow to join his research group.

Having convinced Kastner and Ramamoorthy, Barrow began recruiting experts in material science, robotic systems and other disciplines, combining engineering and medicine to move the project forward. They had a lot of problems to solve.

Using computer vision for stationary objects is relatively easy. However, organs don't stay in one place. The team wanted to use surgical instruments to measure their environment and combine that data with preoperative imaging from MRIs, but they needed to merge those inputs.

“Both the MRI and the video camera are looking at the same thing, let's say the liver,” said Barrow. “But they're looking at it from different angles. Theoretically, it's possible to align those coordinate systems, a process called registration, but it can be exceptionally difficult.”
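In its simplest form, rigid registration means finding the rotation and translation that best align two sets of matched landmark points, one seen by each imaging system. The article doesn't say which method Barrow's team uses, but a common textbook approach is the Kabsch algorithm; the sketch below is an illustration of that idea, not the team's actual pipeline.

```python
import numpy as np

def rigid_register(source, target):
    """Kabsch algorithm: find the rotation R and translation t that
    best map paired 3-D points `source` onto `target` in a
    least-squares sense."""
    src_c = source.mean(axis=0)          # centroid of source cloud
    tgt_c = target.mean(axis=0)          # centroid of target cloud
    A = source - src_c                   # center both clouds
    B = target - tgt_c
    H = A.T @ B                          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

In practice the hard part is what the sketch assumes away: finding reliable point correspondences between an MRI volume and a live video feed, and handling the fact that the organ deforms between the two views.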

Organ shapes can shift significantly as well, and the image from a preoperative scan might not reflect the reality during a procedure. A surgeon may move the liver to target a specific structure, for example, changing its conformation.

Computationally, this is a challenging problem. For each frame, the team needed to map how the actual biological structures were being displaced compared to the original scan. Getting it right is exceptionally important – an indentation on the liver’s surface could change the tumor’s position inside the organ.

“It's assumed we know how much force has been applied to the surface,” said Barrow. “We have to compute the response to that force to understand where the landmark has moved. We can find blood vessels, and other landmarks, before we build the model, but we have to look at how stiff those are in vivo because, as liver disease progresses, the tissue gets stiffer.”
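The relationship Barrow describes, applied force in, landmark displacement out, can be caricatured with a toy linear-elastic model: displacement scales inversely with tissue stiffness, and a deep landmark moves less than the surface does. This is a deliberately simplified illustration; the function name, the exponential depth attenuation, and the `decay_mm` parameter are all assumptions for the sake of the sketch, not the team's model.

```python
import math

def landmark_displacement(force_n, stiffness_n_per_mm, depth_mm, decay_mm=20.0):
    """Toy linear-elastic estimate of landmark motion (illustrative only).

    Surface displacement is force / stiffness, so stiffer (diseased)
    tissue moves less under the same force.  The displacement is then
    attenuated exponentially with the landmark's depth below the surface.
    """
    surface_disp = force_n / stiffness_n_per_mm   # mm, Hooke's-law style
    return surface_disp * math.exp(-depth_mm / decay_mm)
```

Even this caricature shows why in-vivo stiffness matters: doubling the stiffness halves the predicted displacement, so a model calibrated on healthy tissue would misplace landmarks in a cirrhotic liver.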

One solution is an imaging approach called MR Elastography, which provides stiffness maps for organs and other structures, but the process is expensive. Another approach is using a modified materials characterization machine to gauge organ stiffness and build that information into the model.

“We can essentially build a model of the liver on the fly right at the start of the procedure,” said Barrow. “We squeeze it in various locations to build a stiffness map and develop the simulation from these samples. We presented our results to the radiology department and liver surgeons, and they are very excited about it.”
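Turning a handful of probe measurements into a map over the whole organ is, at bottom, a scattered-data interpolation problem. As a rough sketch of that step, the snippet below uses inverse-distance weighting; the real system presumably uses something more principled, and every name here is hypothetical.

```python
import numpy as np

def stiffness_map(sample_pts, sample_stiffness, query_pts, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation of sparse stiffness probes.

    sample_pts:       (n, d) locations where the organ was squeezed
    sample_stiffness: (n,)   measured stiffness at those locations
    query_pts:        (m, d) locations where stiffness is wanted
    """
    # Pairwise distances between every query point and every sample
    d = np.linalg.norm(query_pts[:, None, :] - sample_pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # nearer samples count more
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per query
    return w @ sample_stiffness
```

One property of this scheme fits the surgical setting well: estimates always stay within the range of the measured samples, so a few probes at the start of the procedure yield a conservative map rather than extrapolated extremes.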