YixunLiang.github.io/ReTR

Preview meta tags from the yixunliang.github.io website.

Linked Hostnames

5


Search Engine Appearance

Google

https://yixunliang.github.io/ReTR

ReTR: Modeling Rendering via Transformer for Generalizable Neural Surface Reconstruction

Generalizable neural surface reconstruction techniques have attracted great attention in recent years. However, they suffer from low-confidence depth distributions and inaccurate surface reasoning due to the oversimplified volume rendering process they employ. In this paper, we present Reconstruction TRansformer (ReTR), a novel framework that leverages the transformer architecture to redesign the rendering process, enabling complex modeling of photon-particle interactions. It introduces a learnable meta-ray token and uses the cross-attention mechanism to simulate the interaction of photons with sampled points and to render the observed color. Meanwhile, by operating in a high-dimensional feature space rather than the color space, ReTR mitigates sensitivity to projected colors in source views. These improvements yield accurate surface assessment with high confidence. We demonstrate the effectiveness of our approach on various datasets, showing that our method outperforms current state-of-the-art approaches in both reconstruction quality and generalization ability.



Bing

ReTR: Modeling Rendering via Transformer for Generalizable Neural Surface Reconstruction

https://yixunliang.github.io/ReTR




DuckDuckGo

https://yixunliang.github.io/ReTR

ReTR: Modeling Rendering via Transformer for Generalizable Neural Surface Reconstruction


  • General Meta Tags

    5
    • title
      Rethinking Rendering in Generalizable Neural Surface Reconstruction: A Learning-based Solution
    • Content-Type
      text/html; charset=UTF-8
    • x-ua-compatible
      ie=edge
    • description
    • viewport
      width=device-width, initial-scale=1
  • Open Graph Meta Tags

    8
    • og:image
      https://dorverbin.github.io/refnerf/img/refneus_titlecard.jpg
    • og:image:type
      image/png
    • og:image:width
      1200
    • og:image:height
      630
    • og:type
      website
  • Link Tags

    5
    • icon
      data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>%E2%9C%A8</text></svg>
    • stylesheet
      css/bootstrap.min.css
    • stylesheet
      css/font-awesome.min.css
    • stylesheet
      css/codemirror.min.css
    • stylesheet
      css/app.css
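
For reference, the tags listed above would appear in the page's head roughly as follows. This is a sketch reconstructed only from the values in this report; attribute ordering, and any tags the report does not list, are assumptions:

```html
<head>
  <!-- General meta tags (values taken from the report above) -->
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  <meta http-equiv="x-ua-compatible" content="ie=edge">
  <!-- The description tag is listed in the report without a value -->
  <meta name="description" content="">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Rethinking Rendering in Generalizable Neural Surface Reconstruction: A Learning-based Solution</title>

  <!-- Open Graph meta tags -->
  <meta property="og:image" content="https://dorverbin.github.io/refnerf/img/refneus_titlecard.jpg">
  <meta property="og:image:type" content="image/png">
  <meta property="og:image:width" content="1200">
  <meta property="og:image:height" content="630">
  <meta property="og:type" content="website">

  <!-- Link tags -->
  <link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>%E2%9C%A8</text></svg>">
  <link rel="stylesheet" href="css/bootstrap.min.css">
  <link rel="stylesheet" href="css/font-awesome.min.css">
  <link rel="stylesheet" href="css/codemirror.min.css">
  <link rel="stylesheet" href="css/app.css">
</head>
```

Note that the report lists 5 general meta tags and 8 Open Graph tags, but shows values for only the entries reproduced here.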

Links

5