Augmented Reality Surgical Navigation System Patents

As one of the earliest companies in this field, Novarad holds an extensive intellectual property portfolio that prevents other vendors from using key AR functionality in their products.

X-ray Registration with Image Visible Markers

  • Patent # 11,948,265

Aligning Image Data of a Patient with Actual Views of the Patient Using an Optical Code Affixed to the Patient

  • Patent # 10,825,563

Augmented Reality Viewing and Tagging for Medical Procedures

  • Patent # 10,010,379

Augmenting Real-Time Views of a Patient with Three-Dimensional Data

  • Patent # 9,892,564

  • Patent # 10,475,244

  • Patent # 11,004,271

Using Optical Codes with Augmented Reality Displays

  • Patent # 11,287,874

Augmented Reality Viewing and Tagging for Medical Procedures

  • Patent # 11,266,480

Alignment of Medical Images in Augmented Reality Displays

  • Patent # 11,237,627

Augmented Reality Viewing and Tagging for Medical Procedures

  • Patent # 10,945,807

Patent Summaries

Independent Claims:

  1. A method for augmenting real-time views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for a patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient;

determining virtual morphometric measurements, including size and shape, of the outer layer of the patient from the 3D data;

mapping a 3D space;

registering a real-time position of the outer layer of the patient in the 3D space by mapping the position of the outer layer of the patient within the 3D space;

determining real-time morphometric measurements, including size and shape, of the outer layer of the patient;

automatically registering a virtual position of the outer layer of the patient from the 3D data to align with the registered real-time position of the outer layer of the patient in the 3D space using the virtual morphometric measurements and using the real-time morphometric measurements; and

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time views of the outer layer of the patient, the projected inner layer of the patient from the 3D data being confined within a volume of a virtual spatial difference box, the real-time views being non-image actual views.

Patent Number: 9,892,564

Title: AUGMENTING REAL-TIME VIEWS OF A PATIENT WITH THREE-DIMENSIONAL DATA
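The claim above registers a virtual outer layer to a sensed real-time outer layer using morphometric measurements of size and shape. As a minimal sketch only (the patent does not specify an algorithm, and all names here are hypothetical), one way to align two point sets by centroid and overall size is:

```python
# Hypothetical sketch: align a "virtual" outer-layer point set (from 3D data)
# to a "real-time" point set (from headset sensing) using centroid position
# and an RMS-radius "size" measurement. Not the patented method itself.
import numpy as np

def register_outer_layer(virtual_pts, realtime_pts):
    """Return virtual_pts translated and scaled to match realtime_pts."""
    v = np.asarray(virtual_pts, dtype=float)
    r = np.asarray(realtime_pts, dtype=float)
    v_centroid, r_centroid = v.mean(axis=0), r.mean(axis=0)
    # "Size" morphometric measurement: RMS distance of points from centroid.
    v_size = np.sqrt(((v - v_centroid) ** 2).sum(axis=1).mean())
    r_size = np.sqrt(((r - r_centroid) ** 2).sum(axis=1).mean())
    scale = r_size / v_size
    return (v - v_centroid) * scale + r_centroid

# A unit square registered onto a shifted square of twice the size:
virtual = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
realtime = [[10, 10, 0], [12, 10, 0], [12, 12, 0], [10, 12, 0]]
aligned = register_outer_layer(virtual, realtime)
```

A production system would use a full rigid or non-rigid surface registration; this shows only the size-and-position idea the claim's measurement steps imply.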

  1. A method for augmenting real-time views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for a patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient;

determining virtual morphometric measurements, including size and shape, of the outer layer of the patient from the 3D data;

mapping a 3D space;

registering a real-time position of the outer layer of the patient in the 3D space by mapping the position of the outer layer of the patient within the 3D space;

determining real-time morphometric measurements, including size and shape, of the outer layer of the patient;

positioning extra-visual markers on the patient;

automatically registering a virtual position of the outer layer of the patient from the 3D data to align with the registered real-time position of the outer layer of the patient in the 3D space using the virtual morphometric measurements and using the real-time morphometric measurements;

automatically registering the real-time positions of the extra-visual markers with respect to the registered real-time position of the outer layer of the patient in the 3D space;

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time views of the outer layer of the patient, the real-time views being non-image actual views;

covering at least a portion of the patient and the extra-visual markers with an opaque surgical draping;

sensing, using a sensor of the AR headset, the real-time positions of the extra-visual markers underneath the opaque surgical draping; and

automatically re-registering a virtual position of the outer layer of the patient from the 3D data to align with the sensed real-time positions of the extra-visual markers.

Patent Number: 9,892,564

Title: AUGMENTING REAL-TIME VIEWS OF A PATIENT WITH THREE-DIMENSIONAL DATA

  1. A method for augmenting real-time views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for a patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient;

mapping a 3D space;

registering a real-time position of the outer layer of the patient in the 3D space by mapping the position of the outer layer of the patient within the 3D space;

registering a virtual position of the outer layer of the patient from the 3D data to align with a real-time position of the outer layer of the patient in a 3D space;

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time views of the outer layer of the patient, the projected inner layer of the patient from the 3D data being confined within a volume of a virtual spatial difference box, the real-time views being non-image actual views;

determining real-time morphometric measurements, including size and shape, of an object prior to insertion of the object into the patient through the outer layer of the patient;

automatically tracking a real-time position of the object in the 3D space with respect to the registered real-time position of the outer layer of the patient in the 3D space and with respect to the registered virtual position of the outer layer of the patient from the 3D data; and

while a portion of the object is inserted into the patient through the outer layer of the patient, displaying, in the AR headset, a virtual portion of the object projected into the projected inner layer of the patient from the 3D data.

Patent Number: 9,892,564

Title: AUGMENTING REAL-TIME VIEWS OF A PATIENT WITH THREE-DIMENSIONAL DATA

  1. A method for augmenting real-time views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for a patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient;

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time views of the outer layer of the patient, the projected inner layer of the patient from the 3D data being confined within a volume of a virtual spatial difference box, the real-time views being non-image actual views;

generating, in the AR headset, a virtual user interface that includes options for altering the display of the projected inner layer of the patient from the 3D data;

displaying, in the AR headset, the virtual user interface projected onto real-time views due to a focal orientation of the AR headset not being focused on the patient; and

hiding, in the AR headset, the virtual user interface due to the focal orientation of the AR headset being focused on the patient.

Patent Number: 9,892,564

Title: AUGMENTING REAL-TIME VIEWS OF A PATIENT WITH THREE-DIMENSIONAL DATA

  1. A method for augmenting real-time, non-image actual views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for the patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient; and

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time, non-image actual views of the outer layer of the patient, the projected inner layer of the patient from the 3D data being confined within a volume of a virtual 3D shape.

Patent Number: 10,475,244

Title: AUGMENTING REAL-TIME VIEWS OF A PATIENT WITH THREE-DIMENSIONAL DATA

  1. A method for augmenting real-time, non-image actual views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for the patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient;

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time, non-image actual views of the outer layer of the patient;

automatically tracking a real-time position of a medical instrument with respect to a real-time position of the outer layer of the patient, including an inserted portion of the medical instrument that is inserted into the patient through the outer layer of the patient and including a visible portion of the medical instrument that is not inserted into the patient through the outer layer of the patient; and

while the inserted portion of the medical instrument is inserted into the patient through the outer layer of the patient, displaying, in the AR headset, a virtual inserted portion of the medical instrument projected into the projected inner layer of the patient while the visible portion of the medical instrument is viewed as a non-image actual view.

Patent Number: 11,004,271
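The instrument-tracking claim above splits a tracked medical instrument into a visible portion and an inserted portion that is rendered virtually. A simplified, hypothetical sketch, assuming a straight instrument and a locally planar outer layer at z = 0 (with z < 0 inside the patient):

```python
# Illustrative only: split a straight instrument (tail = held end, tip = point)
# into inserted and visible segments against an assumed skin plane z = 0.
# The patent does not prescribe this representation.
import numpy as np

def split_instrument(tip, tail):
    """Return (inserted_segment, visible_segment) as endpoint pairs."""
    tip, tail = np.asarray(tip, float), np.asarray(tail, float)
    if tip[2] >= 0:                      # tip not below the skin plane
        return None, (tail, tip)
    # Parametric intersection of the tail->tip segment with z = 0.
    t = tail[2] / (tail[2] - tip[2])
    entry = tail + t * (tip - tail)
    return (entry, tip), (tail, entry)

# Instrument tip 2 units inside the patient, tail 3 units above the skin:
inserted, visible = split_instrument(tip=[0, 0, -2.0], tail=[0, 0, 3.0])
```

The inserted segment would be drawn as the virtual portion projected into the inner-layer data, while the visible segment remains a non-image actual view.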

  1. A method for augmenting real-time, non-image actual views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for the patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient, the multiple inner layers of the patient having an original color gradient;

altering the original color gradient of the multiple inner layers to be lighter than the original color gradient in order to be better visible when projected onto real-time, non-image actual views of the outer layer of the patient; and

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time, non-image actual views of the outer layer of the patient, the projected inner layer of the patient from the 3D data having the altered color gradient.

Patent Number: 11,004,271

  1. A method for augmenting real-time, non-image actual views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for a patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient;

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time, non-image actual views of the outer layer of the patient;

generating, in the AR headset, a virtual cursor and/or a virtual user interface that includes options for altering the display of the projected inner layer of the patient from the 3D data;

displaying, in the AR headset, the virtual cursor and/or the virtual user interface projected onto real-time, non-image actual views when it is determined that a focal orientation of the AR headset is focused elsewhere than on the patient; and

hiding, in the AR headset, the virtual cursor and/or the virtual user interface when it is determined that the focal orientation of the AR headset is focused on the patient.

Patent Number: 11,004,271

  1. A method for augmenting real-time, non-image actual views of a patient with three-dimensional (3D) data, the method comprising:

identifying 3D data for a patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient;

displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time, non-image actual views of the outer layer of the patient;

generating, in the AR headset, a virtual user interface that includes options for altering the display of the projected inner layer of the patient from the 3D data;

displaying, in the AR headset, the virtual user interface projected onto real-time, non-image actual views;

determining, in real-time, a distance of the patient from the AR headset; and

updating, in real-time, the displaying of the virtual user interface, in the AR headset, to cause the virtual user interface to be continually positioned at a focal distance from the AR headset that is about equal to the real-time distance of the patient from the AR headset.

Patent Number: 10,475,244

Title: AUGMENTING REAL-TIME VIEWS OF A PATIENT WITH THREE-DIMENSIONAL DATA
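The focal-distance claim above keeps the virtual user interface at the same apparent depth as the patient, so the wearer need not refocus between the two. A minimal sketch, with hypothetical names and a simple gaze-ray placement (the patent does not specify this computation):

```python
# Hypothetical sketch: position a virtual UI panel along the headset's gaze
# direction at the patient's current distance from the headset.
import math

def ui_position(headset_pos, gaze_dir, patient_pos):
    """Place the UI along gaze_dir at the patient's distance from the headset."""
    dist = math.dist(headset_pos, patient_pos)        # real-time distance
    norm = math.sqrt(sum(c * c for c in gaze_dir))
    unit = tuple(c / norm for c in gaze_dir)
    return tuple(h + dist * u for h, u in zip(headset_pos, unit))

# Headset at the origin looking along +x; patient 1.5 m away off to the side:
pos = ui_position((0, 0, 0), (1, 0, 0), (0.9, 1.2, 0))
```

Re-running this each frame gives the "continually positioned" behavior the claim describes.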

  1. A method for augmenting medical imaging of a patient during a medical procedure, the medical imaging displayed using an augmented reality headset worn by a medical professional during the medical procedure, the method comprising:

receiving a visual image of patient anatomy captured by a visual image camera during the medical procedure, the visual image comprising a viewable portion of the patient anatomy obtained during the medical procedure;

retrieving an acquired medical image associated with the patient anatomy from data storage, the acquired medical image comprising imaging acquired of one or more anatomical structures at a plurality of anatomical layers of the patient anatomy;

associating the acquired medical image to align with the viewable portion of the patient anatomy captured by the visual image camera, wherein the one or more anatomical structures of the medical imaging at the plurality of layers are aligned with the visual image of the patient anatomy;

retrieving an augmentation tag from data storage, the augmentation tag associated with a location in one layer of the acquired medical image, the augmentation tag identifying at least one anatomical structure of the acquired medical image found at the location, the augmentation tag further comprising a shape of the anatomical structure; and

projecting, during the medical procedure, the acquired medical image and the augmentation tag using the augmented reality headset to form a single graphical view as an overlay to the patient anatomy viewable through a lens of the augmented reality headset.

  1. A non-transitory machine readable storage medium having instructions embodied thereon, the instructions when executed cause a processor to augment medical imaging of a patient during a medical procedure using an augmented reality headset worn by a medical professional during the medical procedure, comprising:

receiving a visual image of patient anatomy captured by a visual image camera during the medical procedure, the visual image comprising a viewable portion of the patient anatomy obtained during the medical procedure;

identifying a patient marker in the visual image of the patient anatomy, the patient marker comprising information identifying the patient, information identifying patient anatomy that is the subject of the medical procedure, a patient orientation marker, or an image inversion prevention tag;

retrieving an acquired medical image associated with the patient anatomy from a data store based in part on the patient marker, the acquired medical image comprising imaging acquired of one or more anatomical structures at a plurality of anatomical layers of the patient anatomy;

anchoring the acquired medical image to the patient anatomy based in part on the patient orientation marker;

retrieving an augmentation tag from data storage, the augmentation tag associated with the patient marker and a location in the acquired medical image, the augmentation tag identifying at least one anatomical structure of the acquired medical image found at the location, the augmentation tag further comprising a shape of the anatomical structure; and

projecting, during the medical procedure, the acquired medical image and the augmentation tag onto lenses in an augmented reality headset to form a single graphical view which is overlaid on the patient anatomy viewable by the medical professional through the lenses of the augmented reality headset.

 

  1. A system for augmenting a view of patient anatomy during a medical procedure for a medical professional using an augmented reality headset, comprising:

a camera configured to obtain images of the patient anatomy during the medical procedure, the images comprising the view of the patient anatomy;

an augmentation processor in communication with the camera and configured to:

capture morphometric measurements of the patient anatomy from the images captured by the camera during the medical procedure;

identify a patient marker in the images captured by the camera of the patient anatomy, the patient marker comprising information identifying the patient in order to retrieve pre-measured morphometric measurements;

retrieve pre-measured morphometric measurements associated with the patient anatomy from data storage using the patient marker identified in the images captured by the camera;

determine whether the morphometric measurements of the patient anatomy captured from the image match the pre-measured morphometric measurements associated with the patient anatomy as retrieved using the patient marker;

retrieve an acquired medical image associated with the patient anatomy as defined by the patient marker and matched morphometric measurements from data storage, the acquired medical image comprising imaging acquired of one or more anatomical structures at a plurality of anatomical layers of the patient anatomy;

align the acquired medical image with the view of the patient anatomy provided by the augmented reality headset during the medical procedure, using the morphometric measurements; and

form a single graphical view with the acquired medical image and an augmentation tag, the augmentation tag identifying at least one anatomical structure of the acquired medical image found at the location, the augmentation tag further comprising a shape of the anatomical structure; and

an augmented reality headset in communication with the augmentation processor and configured to:

project the single graphical view formed from the acquired medical image and the augmentation tag onto lenses overlaid on the view of the patient anatomy during the medical procedure; and

provide a notification that the acquired medical image matches the patient anatomy.

Patent Number: 10,010,379

Title: Augmented Reality Viewing and Tagging For Medical Procedures

  1. A method for augmenting medical imaging of a patient, the medical imaging displayed using an augmented reality headset worn by a medical professional, the method comprising:

receiving a visual image of patient anatomy captured by a visual image camera, the visual image comprising a viewable portion of the patient anatomy;

retrieving an acquired medical image associated with the patient anatomy from data storage, the acquired medical image comprising imaging acquired of one or more anatomical structures at a plurality of anatomical layers of the patient anatomy;

associating the acquired medical image to align with the viewable portion of the patient anatomy captured by the visual image camera, wherein the one or more anatomical structures of the medical imaging at the plurality of layers are aligned with the visual image of the patient anatomy;

retrieving an augmentation tag from data storage, the augmentation tag associated with a location in one layer of the acquired medical image, the augmentation tag identifying at least one anatomical structure of the acquired medical image found at the location; and

projecting the acquired medical image and the augmentation tag using the augmented reality headset to form a single graphical view as an overlay to the patient anatomy viewable through a lens of the augmented reality headset.

 

  1. A non-transitory machine readable storage medium having instructions embodied thereon, the instructions when executed cause a processor to augment medical imaging of a patient during a medical procedure using an augmented reality headset worn by a medical professional during the medical procedure, comprising:

receiving a visual image of patient anatomy captured by a visual image camera during the medical procedure, the visual image comprising a viewable portion of the patient anatomy obtained during the medical procedure;

identifying a patient marker in the visual image of the patient anatomy, the patient marker comprising information identifying the patient, information identifying patient anatomy that is the subject of the medical procedure, a patient orientation marker, or an image inversion prevention tag;

retrieving an acquired medical image associated with the patient anatomy from a data store based in part on the patient marker, the acquired medical image comprising imaging acquired of one or more anatomical structures at a plurality of anatomical layers of the patient anatomy;

anchoring the acquired medical image to the patient anatomy based in part on the patient orientation marker;

retrieving an augmentation tag from data storage, the augmentation tag associated with the patient marker and a location in the acquired medical image, the augmentation tag identifying at least one anatomical structure of the acquired medical image found at the location; and

projecting, during the medical procedure, the acquired medical image and the augmentation tag onto lenses in an augmented reality headset to form a single graphical view which is overlaid on the patient anatomy viewable by the medical professional through the lenses.

Patent Number: 10,945,807

Title: Augmented Reality Viewing and Tagging For Medical Procedures

  1. A system for augmenting a view of patient anatomy during a medical procedure for a medical professional using an augmented reality headset, comprising:

a camera configured to obtain images of the patient anatomy during the medical procedure, the images comprising the view of the patient anatomy;

an augmentation processor in communication with the camera and configured to:

capture morphometric measurements of the patient anatomy from the images captured by the camera during the medical procedure;

identify a patient marker in the images captured by the camera of the patient anatomy, the patient marker comprising information identifying the patient in order to retrieve pre-measured morphometric measurements;

retrieve pre-measured morphometric measurements associated with the patient anatomy from data storage using the patient marker identified in the images captured by the camera;

determine whether the morphometric measurements of the patient anatomy captured from the image match the pre-measured morphometric measurements associated with the patient anatomy as retrieved using the patient marker;

retrieve an acquired medical image associated with the patient anatomy as defined by the patient marker and matched morphometric measurements from data storage, the acquired medical image comprising imaging acquired of one or more anatomical structures at a plurality of anatomical layers of the patient anatomy;

align the acquired medical image with the view of the patient anatomy provided by the augmented reality headset during the medical procedure, using the morphometric measurements; and

form a single graphical view with the acquired medical image and an augmentation tag, the augmentation tag identifying at least one anatomical structure of the acquired medical image found at a location; and

an augmented reality headset in communication with the augmentation processor and configured to:

project the single graphical view formed from the acquired medical image and the augmentation tag onto lenses overlaid on the view of the patient anatomy during the medical procedure; and

provide a notification that the acquired medical image matches the patient anatomy.

Patent Number: 10,945,807

Title: Augmented Reality Viewing and Tagging For Medical Procedures

Independent Claims:

  1. A method for aligning image data of a patient with actual views of the patient using an optical code affixed to the patient, the method comprising:

affixing an optical code to a patient, the optical code being perceptible to an optical sensor;

affixing a pattern of markers to the patient in a fixed position relative to a position of the optical code, the pattern of markers being perceptible to a non-optical imaging modality;

capturing image data of the patient using the non-optical imaging modality, the image data including an inner layer of the patient, the image data further including the pattern of markers in a fixed position relative to a position of the inner layer of the patient;

sensing, with an optical sensor of an augmented reality (AR) headset, the optical code affixed to the patient and a position of the optical code in a 3D space;

accessing the image data, the accessing being performed by the AR headset accessing a network computer where the image data is stored;

calculating, based on the sensed position of the optical code in the 3D space and the fixed position of the pattern of markers relative to the position of the optical code, the position of the pattern of markers in the 3D space;

registering, based on the calculated position of the pattern of markers in the 3D space and the fixed position in the image data of the pattern of markers relative to the position of the inner layer of the patient, the position of the inner layer of the patient in the 3D space by aligning the calculated position of the pattern of markers in the 3D space with the position of the pattern of markers in the image data, the registering being performed by the AR headset; and

displaying in real-time, in the AR headset and based on the registering, the inner layer of the patient from the image data projected onto actual views of the patient.

 

Optical Codes with a Bandage

 

  1. A method for aligning image data of a patient with actual views of the patient using an optical code affixed to the patient, the method comprising:

affixing a bandage to a patient, the bandage including an optical code printed thereon and a pattern of markers affixed thereto, the pattern of markers having a fixed position in the bandage relative to a position of the optical code on the bandage, the optical code being perceptible to an optical sensor, the pattern of markers being perceptible to a non-optical imaging modality;

capturing image data of the patient using the non-optical imaging modality, the image data including an inner layer of the patient, the image data further including the pattern of markers in a fixed position relative to a position of the inner layer of the patient;

sensing, with an optical sensor of an augmented reality (AR) headset, the optical code affixed to the patient and a position of the optical code in a 3D space;

accessing the image data, the accessing being performed by the AR headset accessing a network computer where the image data is stored;

calculating, based on the sensed position of the optical code in the 3D space and the fixed position of the pattern of markers in the bandage relative to the position of the optical code on the bandage, the position of the pattern of markers in the 3D space;

registering, based on the calculated position of the pattern of markers in the 3D space and the fixed position in the image data of the pattern of markers relative to the position of the inner layer of the patient, the position of the inner layer of the patient in the 3D space by aligning the calculated position of the pattern of markers in the 3D space with the position of the pattern of markers in the image data, the registering being performed by the AR headset; and

displaying in real-time, in the AR headset and based on the registering, the inner layer of the patient from the image data projected onto actual views of the patient.

Patent Number: 10,825,563
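The position chain in the bandage claims above is geometric: once the headset senses the optical code's pose, the markers' fixed offsets (known from the bandage's construction) give their 3D positions even when the markers themselves are not optically visible. A hypothetical sketch, assuming a rotation-matrix-plus-translation pose representation:

```python
# Illustrative only: compute marker world positions from the sensed pose of
# the optical code and the markers' fixed offsets in the code's local frame.
# Pose representation and names are assumptions, not from the patent.
import numpy as np

def markers_in_space(code_rotation, code_position, marker_offsets):
    """World positions of markers given the optical code's sensed pose."""
    R = np.asarray(code_rotation, float)    # 3x3 rotation of the code
    t = np.asarray(code_position, float)    # code position in 3D space
    offsets = np.asarray(marker_offsets, float)
    return offsets @ R.T + t

# Code at (5, 0, 1), rotated 90 degrees about z; markers 2 cm and 4 cm along
# the code's local +x axis:
Rz = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
pts = markers_in_space(Rz, [5, 0, 1], [[0.02, 0, 0], [0.04, 0, 0]])
```

These computed marker positions are then matched to the markers visible in the non-optical image data to register the inner layers.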

Optical Codes with Tubing or Container for Auto-Registration of Images

 

  1. A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising:

identifying one or more optical codes associated with the body of the person using an optical sensor of the AR headset, wherein an optical code for a body of the person has an image visible marker that includes an image visible contrast medium in tubing or an enclosed container in a fixed position relative to the optical code;

identifying the image visible marker with the image visible contrast medium in the tubing in the image data set acquired using a medical imaging device; and

aligning the image data set with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and using the fixed position of the image visible marker with the image visible contrast medium in the tubing with respect to the optical code as referenced to a representation of the image visible marker with the image visible contrast medium in the tubing in the image data set.

 

Optical Codes and an Image Visible Marker for Holographic Size Calibration

 

  1. A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising:

detecting visual image data of a portion of the body of the person using an optical sensor of the AR headset;

identifying one or more optical codes associated with the body of the person, wherein an optical code for a body of the person has an image visible marker which is located in a fixed position relative to the optical code;

identifying a known size of a geometric attribute of the image visible marker associated with the body of the person, wherein the image visible marker appears in the image data set;

comparing a measured size of the geometric attribute of the image visible marker in the image data set to the known size of a geometric attribute of an image visible marker to determine a computed difference in size between a measured size and the known size of the geometric attribute of the image visible marker in the image data set; and

modifying a scaling of the image data set, to be aligned with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and the image visible marker, based at least in part on the computed difference in size between the known size and measured size of the geometric attribute of the image visible marker.
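The size-calibration claim above reduces to a single ratio: the data set is rescaled so the image visible marker's measured size matches its known physical size. A minimal sketch with illustrative names (the patent does not specify the computation):

```python
# Hypothetical sketch: uniform rescaling of an image data set based on the
# known vs. measured size of an image visible marker's geometric attribute.
def calibrate_scale(known_size, measured_size):
    """Scale factor to apply to the image data set."""
    return known_size / measured_size

def rescale(points, factor, origin=(0.0, 0.0, 0.0)):
    """Rescale 3D points about an origin by a uniform factor."""
    return [tuple(o + factor * (p - o) for p, o in zip(pt, origin))
            for pt in points]

# Marker known to be 10 mm but measured as 9.5 mm in the data set:
factor = calibrate_scale(10.0, 9.5)
scaled = rescale([(0.0, 0.0, 0.0), (9.5, 0.0, 0.0)], factor)
```

After rescaling, alignment via the optical codes proceeds with a data set whose holographic size matches the patient.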

 

 

Optical Codes and Center Point Calculation to Improve Registration Accuracy

 

  1. A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising:

identifying one or more optical codes associated with the body of the person using an optical sensor of the AR headset, wherein an optical code for a body of the person is located in a fixed position relative to an image visible marker;

identifying one or more edges of an optical code associated with the body of the person;

identifying a center point of the optical code using the one or more edges of the optical code; and

aligning the image data set with the body of the person using the center point of the optical code and the image visible marker.

 

  1. A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising:

identifying a plurality of optical codes associated with the body of the person using an optical sensor of the AR headset, wherein the optical codes for a body of the person are located in a fixed position relative to an image visible marker;

identifying one or more edges of the plurality of optical codes associated with the body of the person;

finding the center points of the plurality of optical codes using the one or more edges of the optical codes;

calculating an average center point of the center points of the plurality of optical codes; and

aligning the image data set with a body of a person using the average center point of the plurality of optical codes and image visible markers.

Patent Number: 11,237,627
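The center-point claims above estimate each optical code's center from its detected edges, then average the centers of several codes to damp per-code detection noise. A simplified sketch, using detected corner points as a hypothetical stand-in for edge detection:

```python
# Illustrative only: compute an average center point across several detected
# optical codes, each center estimated from its detected corners.
def center_of(corners):
    """Center point of one optical code from its detected corner points."""
    n = len(corners)
    return tuple(sum(c[i] for c in corners) / n for i in range(3))

def average_center(codes):
    """Average center point across several detected optical codes."""
    centers = [center_of(c) for c in codes]
    n = len(centers)
    return tuple(sum(c[i] for c in centers) / n for i in range(3))

# Two square codes, one centered at (0, 0, 0) and one at (4, 0, 0):
code_a = [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0)]
code_b = [(3, -1, 0), (5, -1, 0), (5, 1, 0), (3, 1, 0)]
avg = average_center([code_a, code_b])
```

Registering to the averaged point rather than any single code's center is what improves registration accuracy.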

Summary: The '874 patent is directed toward determining the position of a medical implement with respect to a person's body and an image data set using optical codes. A method is also described for aligning a fluoroscopic image and an image data set with a body of a person using optical codes on the fluoroscopic device and the person. Another method describes aligning fluoroscopic images to the person using the position and orientation of the fluoroscopic device, determined from optical codes on the device. In another aspect, the position and orientation of an ultrasonic transducer are determined using optical codes, enabling an ultrasound image to be aligned with a person in an AR display. Yet another method describes confirming, using optical codes, that the correct portion of the body and the correct medical implement are used in a procedure.

 

Independent Claims:

 

Referencing position of medical implement to person and image data set

 

  1. A method for using an augmented reality (AR) headset to co-localize an image data set and a medical implement, comprising:

detecting visual image data of a portion of a body of a person and the medical implement using an optical sensor of the AR headset;

identifying one or more optical codes associated with the body of the person and the medical implement, wherein an optical code on a body of the person is located in a fixed position relative to an image visible marker;

confirming that the medical implement is assigned to be used in a medical procedure using the one or more optical codes;

aligning the image data set with the body of the person using the one or more optical codes on the body of the person as viewed through the AR headset and using the fixed position of the image visible marker with respect to the optical code as referenced to a representation of the image visible marker in the image data set; and

determining a position of the medical implement with respect to the body of the person using the one or more optical codes on the medical implement and the body of the person to enable the medical implement to be referenced to the image data set and the body of the person, as viewed through the AR headset.
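The final step of the claim above, referencing the implement to a patient-fixed frame, amounts to an inverse rigid transform. The following is a simplified 2D sketch (poses reduced to an angle plus a translation); the names and the 2D reduction are illustrative assumptions, not the patent's method:

```python
import math

def to_frame(point, frame_origin, frame_angle):
    """Express a camera-frame point in a frame located at frame_origin
    and rotated by frame_angle (i.e., apply the inverse rigid transform)."""
    dx = point[0] - frame_origin[0]
    dy = point[1] - frame_origin[1]
    c, s = math.cos(-frame_angle), math.sin(-frame_angle)
    return (c * dx - s * dy, s * dx + c * dy)

# Optical code on the body at (10, 5), rotated 90 degrees in the camera
# view; implement tip detected at (10, 7) in the same camera frame.
tip_in_body = to_frame((10, 7), (10, 5), math.pi / 2)
print(tip_in_body)  # ≈ (2.0, 0.0)
```

Because both the implement and the body-mounted code are tracked in the same camera frame, the implement's position in the patient-fixed (and hence image-data-set) frame follows from composing the two poses.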

 

AR, optical codes and fluoroscopy with image data set

 

  1. A method for using an augmented reality (AR) display to align a fluoroscopic image with respect to a body of a person and an image projection from an image data set, comprising:

detecting visual image data of a portion of the body of a person and a fluoroscopic device, which is mobile with respect to the body of the person, using an optical sensor of the AR display;

identifying one or more optical codes on the body of the person and on the fluoroscopic device, wherein one or more optical codes on the body of the person have a fixed position relative to an image visible marker;

aligning an image data set of the body of the person using the fixed position of the image visible marker relative to the one or more optical codes on the body of the person as viewed through the AR display;

determining a position and orientation of the fluoroscopic device with respect to the body of the person using the one or more optical codes on the fluoroscopic device;

generating an image projection of the image data set based in part on the position and orientation of the fluoroscopic device;

displaying the image projection through the AR display; and

displaying a fluoroscopic image from the fluoroscopic device that is aligned with the body of the person and the image projection based on the image visible marker or the position and orientation of the fluoroscopic device, using the AR display.
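Generating an "image projection of the image data set based in part on the position and orientation of the fluoroscopic device," as in the claim above, can be sketched with a minimal pinhole model. The source-at-origin convention and the `focal_len` parameter are simplifying assumptions for illustration:

```python
def project(point3d, focal_len):
    """Perspective-project a point (x, y, z), z > 0, onto a detector
    plane at distance focal_len from the X-ray source."""
    x, y, z = point3d
    return (focal_len * x / z, focal_len * y / z)

# A voxel 50 mm off-axis at 1000 mm depth, detector at 1500 mm:
print(project((50.0, 0.0, 1000.0), 1500.0))  # → (75.0, 0.0)
```

Projecting the image data set through the same geometry as the fluoroscope is what lets the synthetic projection and the live fluoroscopic image be overlaid consistently in the AR display.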

 

Aligning AR to patient and live fluoroscopy

 

  1. A method for using an augmented reality (AR) display to align a fluoroscopic image with respect to a body of a person that has one or more optical codes by using a position and orientation of a fluoroscopic device that has one or more optical codes, comprising:

identifying the one or more optical codes on the body of the person and on the fluoroscopic device, which is mobile with respect to the body of the person, using an optical sensor of the AR display;

determining the position and orientation of the fluoroscopic device with respect to the body of the person using the one or more optical codes on the fluoroscopic device; and

displaying, using the AR display, a fluoroscopic image from the fluoroscopic device aligned with the body of the person by referencing the optical codes on the body of the person and the position and orientation of the fluoroscopic device.

 

Optical codes and alignment of AR to real-time ultrasound

 

  1. A method for using an augmented reality (AR) display to align an ultrasonic image with respect to a body of a person, comprising:

detecting visual image data of a portion of the body of a person and an ultrasonic transducer using an optical sensor of the AR display;

identifying one or more optical codes on the body of the person and on the ultrasonic transducer;

determining a position and orientation of the ultrasonic transducer with respect to the body of the person using the one or more optical codes on the ultrasonic transducer; and

displaying an ultrasonic image from the ultrasonic transducer that is aligned with the body of the person by referencing the optical codes on the body of the person and the position and orientation of the ultrasonic transducer, using the AR display.

 

Confirming correct patient body portion and medical implement are in medical procedure

  1. A method for validating a medical procedure using optical codes, comprising:

detecting visual image data of a portion of a body of a patient and a medical implement, using an optical sensor of an AR headset;

identifying one or more optical codes visibly displayed on the body of the patient and on the medical implement, wherein the one or more optical codes on the body of the patient are in a fixed position relative to an image visible marker;

aligning an image data set with the body of the patient using a known fixed position of the image visible marker with reference to the one or more optical codes on the body of the patient;

confirming that a correct patient is in the medical procedure based on a correct alignment of a surface of the body of the patient aligning with a surface of the image data set; and

confirming that a correct portion of the body and a correct medical implement are in the medical procedure using the one or more optical codes.
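The confirmation steps in the claim above reduce to checking scanned optical codes against the scheduled procedure. Below is a minimal sketch; the manifest schema and code identifiers are hypothetical, not drawn from the patent:

```python
# Hypothetical manifest for a scheduled procedure: each role maps to
# the optical-code identifier that must be present.
PROCEDURE_MANIFEST = {
    "patient": "PAT-1138",
    "site": "SITE-L4L5",
    "implement": "IMP-PEDICLE-SCREW-65",
}

def validate(scanned_codes):
    # Every manifest entry must appear among the scanned optical codes;
    # report any roles whose expected code was not seen.
    missing = [role for role, code in PROCEDURE_MANIFEST.items()
               if code not in scanned_codes]
    return (len(missing) == 0, missing)

ok, missing = validate({"PAT-1138", "SITE-L4L5", "IMP-PEDICLE-SCREW-65"})
print(ok)  # → True
print(validate({"PAT-1138", "SITE-L4L5"}))  # → (False, ['implement'])
```

A real system would combine this code check with the surface-alignment confirmation the claim also requires, so that neither a mislabeled code nor a mis-registered patient passes validation alone.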

Patent Number: 11,287,874

Summary: The independent claims below are from allowed application serial number 17/706,462.

 

 

Independent Claims:

 

Referencing medical implement to person and image data set

  1. A method for using an augmented reality (AR) headset to co-localize an image data set and a medical implement that has one or more optical codes, comprising:

detecting the medical implement using a sensor of the AR headset;

identifying one or more optical codes associated with a body of a person and the medical implement, wherein an optical code on the body of the person is located in a fixed position relative to an image visible marker;

aligning the image data set with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and using the fixed position of the image visible marker with respect to the optical code as referenced to a representation of the image visible marker in the image data set; and

determining a position of the medical implement with respect to the body of the person using one or more optical codes on the medical implement and the body of the person to enable the medical implement to be referenced to the image data set and the body of the person.

 

AR, optical codes and fluoroscopy with image data set

  1. A method for using an augmented reality (AR) headset to align a fluoroscopic image with respect to a body of a person and an image projection from an image data set, comprising:

detecting a fluoroscopic device, which is mobile with respect to the body of the person, using a sensor of the AR headset;

identifying one or more optical codes on the body of the person and on the fluoroscopic device, wherein one or more optical codes on the body of the person have a fixed position relative to an image visible marker;

aligning an image data set of the body of the person using the fixed position of the image visible marker relative to the one or more optical codes on the body of the person as viewed through the AR headset;

determining a position and orientation of the fluoroscopic device with respect to the body of the person using the one or more optical codes on the fluoroscopic device;

displaying an image projection of the image data set based in part on the position and orientation of the fluoroscopic device; and

displaying a fluoroscopic image from the fluoroscopic device that is aligned with the body of the person and the image projection based on the image visible marker or the position and orientation of the fluoroscopic device, using the AR headset.

 

Aligning AR to patient and live fluoroscopy

  1. A method, comprising:

identifying one or more optical codes on a body of a person and on a fluoroscopic device, which is mobile with respect to the body of the person, using a sensor of an AR (augmented reality) headset;

determining a position and orientation of the fluoroscopic device with respect to the body of the person using the one or more optical codes associated with the fluoroscopic device; and

displaying, using the AR headset, a fluoroscopic image from the fluoroscopic device aligned with the body of the person by referencing the optical codes on the body of the person and the position and orientation of the fluoroscopic device.

 

Optical codes and alignment of AR to real-time ultrasound

  1. A method, comprising:

detecting an ultrasonic transducer using a sensor of an AR (augmented reality) headset; and

displaying an ultrasonic image from the ultrasonic transducer that is aligned with a body of a person based on a position and orientation of the ultrasonic transducer, using the AR headset.

 

Confirming correct patient and medical implement are in medical procedure

  1. A method, comprising:

detecting a portion of a body of a patient and a medical implement, using a sensor of an AR headset;

identifying one or more optical codes visibly displayed on the body of the patient and on the medical implement;

aligning an image data set with the body of the patient;

confirming that a correct patient is in a medical procedure based on a correct alignment of a surface of the body of the patient aligning with a surface of the image data set; and

confirming that a correct portion of the body and a correct medical implement are in the medical procedure using the one or more optical codes.

 

Optical codes and position of ultrasound transducer used to position ultrasound in AR

  1. A method, comprising:

identifying one or more optical codes on an ultrasonic transducer using an AR headset; and

displaying an ultrasonic image from the ultrasonic transducer aligned with a body of a person by using a position and orientation of the ultrasonic transducer as determined by detecting the one or more optical codes on the ultrasonic transducer, using the AR headset.

 

Position of ultrasound transducer detected using AR headset for positioning of ultrasound

  1. A method, comprising:

detecting a position and orientation of an ultrasonic transducer using a sensor of an AR (augmented reality) headset;

identifying one or more optical codes on a body of a person using the AR headset;

aligning an image data set with the body of the person as viewed through the AR headset, using the one or more optical codes on the body of the person; and

aligning an ultrasonic image from the ultrasonic transducer with the body of a person by referencing the position and orientation of the ultrasonic transducer, using the AR headset.

Summary: A technology is described for aligning an image data set with a patient using an augmented reality (AR) headset. A method may include obtaining an image data set representing an anatomical structure of a patient. A two-dimensional (2D) X-ray generated image of at least a portion of the anatomical structure of the patient in the image data set and a visible marker may be obtained. The image data set can be aligned to the X-ray generated image by using data fitting. A location of the visible marker may be defined in the image data set using alignment with the X-ray generated image. The image data set may be aligned with a body of the patient, using the visible marker in the image data set as referenced to the visible marker seen on the patient through the AR headset.
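The "data fitting" alignment described above can be sketched, in a much-simplified 2D, landmark-based form, as a least-squares rigid fit between marker positions in the image data set and matching positions in the X-ray image. This is an illustrative reconstruction of the general technique, not Novarad's implementation:

```python
import math

def rigid_fit_2d(src, dst):
    """Return (angle, tx, ty) of the rigid transform that best maps
    the src landmarks onto the dst landmarks (least squares)."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    # Optimal rotation from the cross/dot sums of centered points.
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= sx; ay -= sy; bx -= dx; by -= dy
        num += ax * by - ay * bx   # cross terms
        den += ax * bx + ay * by   # dot terms
    angle = math.atan2(num, den)
    c, s = math.cos(angle), math.sin(angle)
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return angle, tx, ty

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (2, 4), (1, 3)]   # src rotated 90 degrees, shifted by (2, 3)
angle, tx, ty = rigid_fit_2d(src, dst)
print(round(math.degrees(angle)))  # → 90
```

Once such a fit places the visible marker within the image data set, the headset can align the data set to the patient by matching that marker to the one it sees optically.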

Patent Number: 11,948,265

Patents Pending

  • Using Optical Codes with Augmented Reality Displays

  • Calibration for Augmented Reality

  • Secure Access to Stored Data Files Using Tokens Encoded in Optical Codes

  • Alignment of Medical Images in Augmented Reality Displays

  • And many more pending

Learn more about VisAR