ieeexplore.ieee.org/document/7050696

Preview meta tags from the ieeexplore.ieee.org website.

Linked Hostnames: 2


Search Engine Appearance

Google

https://ieeexplore.ieee.org/document/7050696

A multimodal approach for image de-fencing and depth inpainting

Low-cost RGB-D sensors such as the Microsoft Kinect have enabled the use of depth data alongside color images. In this work, we propose a multimodal approach to the problem of removing fences/occlusions from images captured with a Kinect camera. We also perform depth completion by fusing data from multiple recorded depth maps affected by occlusions. The availability of aligned image and depth data from the Kinect aids the detection of fence locations; however, accurate estimation of the relative shifts between the captured color frames is also necessary. For static scene elements with simple relative motion between the camera and the objects, we propose using the affine scale-invariant feature transform (ASIFT) descriptor to compute the relative global displacements. Using the depth map obtained by the Kinect, we also address the scenario wherein the relative motion between frames is not global. For such complex motion of scene pixels, we use a recently proposed robust optical flow technique, and we show results on challenging real-world data in which the scene is dynamic. The ill-posed inverse problems of estimating the de-fenced image and the inpainted depth map are solved within an optimization-based framework. Specifically, we model the unoccluded image and the completed depth map as two distinct Markov random fields and obtain their maximum a posteriori estimates using loopy belief propagation.



Bing and DuckDuckGo previews show the same title, URL, and description.
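The abstract's final step, MAP estimation over two Markov random fields, corresponds to a standard energy minimization. A generic sketch of that formulation, assuming a per-pixel data term and a pairwise smoothness prior (the specific potentials used in the paper are not given on this page):

```latex
\hat{x} = \arg\max_{x} \; p(y_{1:N} \mid x)\, p(x)
        = \arg\min_{x} \; \sum_{p} D_p(x_p) \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V(x_p, x_q)
```

Here x is the de-fenced image (an analogous field is defined for the completed depth map), y_{1:N} are the aligned, fence-masked observations, D_p is the data cost at pixel p, and V penalizes label differences between neighboring pixels (p, q).

Loopy belief propagation approximately minimizes such an energy by iteratively passing messages between neighboring pixels. Below is a minimal min-sum sketch on a toy 4-connected grid with a Potts smoothness term; the grid size, random costs, and occlusion mask are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

H, W, L = 8, 8, 4      # toy grid height/width and number of labels (assumed)
LAM = 1.0              # Potts smoothness weight (assumed)
ITERS = 30

rng = np.random.default_rng(0)
unary = rng.random((H, W, L))            # D_p(x_p): stand-in data costs
occluded = rng.random((H, W)) < 0.3      # simulated fence mask
unary[occluded] = 0.0                    # occluded pixels carry no data term

# msg[d, i, j, :] = incoming message at pixel (i, j) from direction d:
# 0 = from above, 1 = from below, 2 = from left, 3 = from right
msg = np.zeros((4, H, W, L))

def potts_min_convolve(h):
    """Min-sum message under a Potts pairwise cost:
    out[l] = min(h[l], min_k h[k] + LAM)."""
    return np.minimum(h, h.min(axis=-1, keepdims=True) + LAM)

for _ in range(ITERS):
    total = unary + msg.sum(axis=0)      # unary plus all incoming messages
    new = np.zeros_like(msg)
    # The message sent to a neighbor excludes what that neighbor sent us,
    # and arrives there from the opposite direction, hence the index shifts.
    h = potts_min_convolve(total - msg[1]); new[0, 1:, :, :] = h[:-1, :, :]
    h = potts_min_convolve(total - msg[0]); new[1, :-1, :, :] = h[1:, :, :]
    h = potts_min_convolve(total - msg[3]); new[2, :, 1:, :] = h[:, :-1, :]
    h = potts_min_convolve(total - msg[2]); new[3, :, :-1, :] = h[:, 1:, :]
    msg = new - new.mean(axis=-1, keepdims=True)   # normalize for stability

labels = (unary + msg.sum(axis=0)).argmin(axis=-1)  # approximate MAP labeling
print(labels)
```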

  • General Meta Tags (12)
    • title
      A multimodal approach for image de-fencing and depth inpainting | IEEE Conference Publication | IEEE Xplore
    • google-site-verification
      qibYCgIKpiVF_VVjPYutgStwKn-0-KBB6Gw4Fc57FZg
    • Description
      Low cost RGB-D sensors such as the Microsoft Kinect have enabled the use of depth data along with color images. In this work, we propose a multi-modal approach
    • Content-Type
      text/html; charset=utf-8
    • viewport
      width=device-width, initial-scale=1.0
  • Open Graph Meta Tags (3)
    • og:image
      https://ieeexplore.ieee.org/assets/img/ieee_logo_smedia_200X200.png
    • og:title
      A multimodal approach for image de-fencing and depth inpainting
    • og:description
      Identical to the abstract shown above.
  • Twitter Meta Tags (1)
    • twitter:card
      summary
  • Link Tags (9)
    • canonical
      https://ieeexplore.ieee.org/document/7050696
    • icon
      /assets/img/favicon.ico
    • stylesheet
      https://ieeexplore.ieee.org/assets/css/osano-cookie-consent-xplore.css
    • stylesheet
      /assets/css/simplePassMeter.min.css?cv=20250812_00000
    • stylesheet
      /assets/dist/ng-new/styles.css?cv=20250812_00000
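For reference, a minimal sketch of how a meta-tag preview like the one above could be produced with only the Python standard library. The URL is the one from this page; that the server returns usable static HTML to a plain client is an assumption (IEEE Xplore renders much of its content client-side, so the tags retrieved this way may differ from the listing above):

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class MetaCollector(HTMLParser):
    """Collects <meta> and <link> tags from an HTML document."""
    def __init__(self):
        super().__init__()
        self.metas, self.links = [], []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta":
            # meta tags key on property (Open Graph), name, or http-equiv
            key = a.get("property") or a.get("name") or a.get("http-equiv")
            if key:
                self.metas.append((key, a.get("content", "")))
        elif tag == "link":
            self.links.append((a.get("rel", ""), a.get("href", "")))

url = "https://ieeexplore.ieee.org/document/7050696"
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})  # some sites reject bare clients
html = urlopen(req).read().decode("utf-8", errors="replace")

parser = MetaCollector()
parser.feed(html)
for key, content in parser.metas:   # e.g. ('og:title', 'A multimodal approach ...')
    print(f"{key}: {content[:80]}")
for rel, href in parser.links:      # e.g. ('canonical', 'https://ieeexplore.ieee.org/...')
    print(f"{rel}: {href}")
```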

Links: 17