doi.org/10.5281/zenodo.4249627

Preview meta tags from the doi.org website.

Linked Hostnames

16

Search Engine Appearance

Google

https://doi.org/10.5281/zenodo.4249627

2D U-net models trained to segment human placental maternal/fetal blood volumes and blood vessels from synchrotron micro-CT data, along with a sample data volume.

This dataset contains a 512 x 512 x 512 voxel volume taken from an imaging dataset of human placental tissue collected at the Diamond Light Source Manchester Imaging Branchline (I13-2) on visits MG23941 and MG22562, using in-line, high-resolution, synchrotron-sourced phase-contrast X-ray micro-computed tomography. The data are saved in HDF5 format with a uint8 datatype. Alongside the volume are two 2D binary U-net models trained to segment this data: one segments the data into regions of maternal/fetal blood volume, and the other segments the blood vessels. Both models were trained using the fastai Python package, which utilises the PyTorch library. These models were used to segment the data in our paper "A massively multi-scale approach to characterising tissue architecture by synchrotron micro-CT applied to the human placenta", available at https://www.biorxiv.org/content/10.1101/2020.12.07.411462v1. The code used for training the U-net models and for predicting the segmentation of the data volume is available at https://github.com/DiamondLightSource/placental-segmentation-2dunet and is published at https://doi.org/10.5281/zenodo.4252562.
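
The record gives the container format (HDF5) and datatype (uint8) but does not state the internal dataset path, so the following is only a minimal sketch of inspecting and loading the sample volume with h5py; the filename comes from the record's file list, and treating the first key as the image volume is an assumption.

    import h5py

    # Minimal sketch: inspect and load the sample volume.
    # The internal HDF5 dataset path is not stated in the record description,
    # so the keys are listed first and the first one is assumed to hold the volume.
    path = "specimen1_512cube_zyx_800-1312_1000-1512_700-1212_DATA.h5"

    with h5py.File(path, "r") as f:
        print(list(f.keys()))          # discover the dataset name(s)
        key = next(iter(f.keys()))     # assumption: first dataset is the image volume
        volume = f[key][()]            # load into memory as a NumPy array

    print(volume.shape, volume.dtype)  # expected: (512, 512, 512) uint8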



Bing

Same title, URL, and description as shown for Google above.



DuckDuckGo

Same title, URL, and description as shown for Google above.

  • General Meta Tags

    27
    • title
      2D U-net models trained to segment human placental maternal/fetal blood volumes and blood vessels from synchrotron micro-CT data, along with a sample data volume.
    • charset
      utf-8
    • X-UA-Compatible
      IE=edge
    • viewport
      width=device-width, initial-scale=1
    • google-site-verification
      5fPGCLllnWrvFxH9QWI0l1TadV7byeEvfPcyK2VkS_s
  • Open Graph Meta Tags

    4
    • og:title
      2D U-net models trained to segment human placental maternal/fetal blood volumes and blood vessels from synchrotron micro-CT data, along with a sample data volume.
    • og:description
      Identical to the record description shown under Search Engine Appearance above.
    • og:url
      https://zenodo.org/records/4249627
    • og:site_name
      Zenodo
  • Twitter Meta Tags

    4
    • twitter:card
      summary
    • twitter:site
      @zenodo_org
    • twitter:title
      2D U-net models trained to segment human placental maternal/fetal blood volumes and blood vessels from synchrotron micro-CT data, along with a sample data volume.
    • twitter:description
      Identical to the record description shown under Search Engine Appearance above.
  • Link Tags

    13
    • alternate
      https://zenodo.org/records/4249627/files/specimen1_placental_blood_vessels_2dUnet.pkl
    • alternate
      https://zenodo.org/records/4249627/files/specimen1_512cube_zyx_800-1312_1000-1512_700-1212_DATA.h5
    • alternate
      https://zenodo.org/records/4249627/files/specimen1_placental_blood_volumes_2dUnet.pkl
    • apple-touch-icon
      /static/apple-touch-icon-120.png
    • apple-touch-icon
      /static/apple-touch-icon-152.png
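
The alternate link tags above point to the sample HDF5 volume and the two exported fastai learners (.pkl files). As a rough, hypothetical sketch only (the authors' actual training and prediction code lives in the GitHub repository linked in the description), one of the models might be loaded and applied to a single slice as follows; this assumes the exported learner loads with fastai's load_learner without needing custom functions from that repository, and that a raw uint8 slice is an acceptable input without further preprocessing.

    import h5py
    from fastai.vision.all import PILImage, load_learner

    # Hypothetical sketch only: load one exported learner and segment a single slice.
    # load_learner can fail if the export references custom functions defined in the
    # authors' repository; those functions would need to be importable first.
    learn = load_learner("specimen1_placental_blood_vessels_2dUnet.pkl")

    with h5py.File(
        "specimen1_512cube_zyx_800-1312_1000-1512_700-1212_DATA.h5", "r"
    ) as f:
        key = next(iter(f.keys()))   # assumption: first dataset holds the volume
        slice_2d = f[key][256]       # one 2D slice (z index 256) of the uint8 volume

    img = PILImage.create(slice_2d)      # wrap the slice as a fastai image
    mask, _, probs = learn.predict(img)  # predicted binary mask and class probabilities
    print(mask.shape, probs.shape)

Segmenting the full volume would repeat this over every slice and over both models; the linked repository and the code record at https://doi.org/10.5281/zenodo.4252562 contain the authors' actual pipeline.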

Links

60