doi.org/10.5281/zenodo.4249627
Preview meta tags from the doi.org website.
Linked Hostnames
16 hostnames:
- 28 links to doi.org
- 10 links to about.zenodo.org
- 3 links to help.zenodo.org
- 3 links to orcid.org
- 2 links to developers.zenodo.org
- 2 links to github.com
- 2 links to home.cern
- 2 links to zenodo.org
Search Engine Appearance
2D U-net models trained to segment human placental maternal/fetal blood volumes and blood vessels from synchrotron micro-CT data, along with a sample data volume.
This dataset contains a 512 x 512 x 512 pixel volume taken from an imaging dataset of human placental tissue collected at the Diamond Light Source Manchester Imaging Branchline, I13-2, on visits MG23941 and MG22562 using in-line high-resolution synchrotron-sourced phase-contrast micro-computed X-ray tomography. This data is saved in HDF5 format with a uint8 datatype. Alongside this are two 2D binary U-net models that have been trained to segment this data: one model segments the data into regions of maternal/fetal blood volume, the other segments the blood vessels. Both models were trained using the fastai Python package, which utilises the PyTorch library. These models were used to segment the data in our paper "A massively multi-scale approach to characterising tissue architecture by synchrotron micro-CT applied to the human placenta", which can be found at https://www.biorxiv.org/content/10.1101/2020.12.07.411462v1. The code used for training the U-net models and for predicting the segmentation of the data volume can be found at https://github.com/DiamondLightSource/placental-segmentation-2dunet and is published at https://doi.org/10.5281/zenodo.4252562
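The HDF5 sample volume described above can be read with the h5py package. The following is a minimal sketch, not the record's own code; the dataset key inside the file is not stated in the record description, so the name "data" is an assumption and the helper lists the available keys when that guess is wrong:

```python
import h5py


def load_volume(path, key="data"):
    """Read an HDF5 volume into a NumPy array.

    The key "data" is a guess; the record does not document the
    dataset name inside the .h5 file, so a KeyError reports what
    keys are actually present.
    """
    with h5py.File(path, "r") as f:
        if key not in f:
            raise KeyError(f"{key!r} not found; available keys: {list(f.keys())}")
        # Read the full dataset into memory (the sample volume is
        # 512 x 512 x 512 uint8, about 134 MB).
        return f[key][()]
```

For the record's sample file the path would be the downloaded `specimen1_512cube_zyx_800-1312_1000-1512_700-1212_DATA.h5`, and the returned array should have shape `(512, 512, 512)` and dtype `uint8` if the description is accurate.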
General Meta Tags
27 tags:
- title: 2D U-net models trained to segment human placental maternal/fetal blood volumes and blood vessels from synchrotron micro-CT data along with a sample data volume.
- charset: utf-8
- X-UA-Compatible: IE=edge
- viewport: width=device-width, initial-scale=1
- google-site-verification: 5fPGCLllnWrvFxH9QWI0l1TadV7byeEvfPcyK2VkS_s
Open Graph Meta Tags
4 tags:
- og:title: 2D U-net models trained to segment human placental maternal/fetal blood volumes and blood vessels from synchrotron micro-CT data along with a sample data volume.
- og:description: This dataset contains a 512 x 512 x 512 pixel volume taken from an imaging dataset of human placental tissue collected at Diamond Light Source Manchester Imaging Branchline, I13-2 on visits MG23941 and MG22562 using in-line high-resolution synchrotron-sourced phase-contrast micro-computed X-ray tomography. This data is saved in HDF5 format with a uint8 datatype. Alongside this are two 2D binary U-net models that have been trained to segment this data. One model segments the data into regions of maternal/fetal blood volume, the other segments the blood vessels. Both models were trained using the fastai Python package, which utilises the PyTorch library. These models were used to segment the data in our paper "A massively multi-scale approach to characterising tissue architecture by synchrotron micro-CT applied to the human placenta" which can be found at https://www.biorxiv.org/content/10.1101/2020.12.07.411462v1. The code used for training the U-net models and for predicting the segmentation of the data volume can be found at https://github.com/DiamondLightSource/placental-segmentation-2dunet and is published at https://doi.org/10.5281/zenodo.4252562
- og:url: https://zenodo.org/records/4249627
- og:site_name: Zenodo
Twitter Meta Tags
4 tags:
- twitter:card: summary
- twitter:site: @zenodo_org
- twitter:title: 2D U-net models trained to segment human placental maternal/fetal blood volumes and blood vessels from synchrotron micro-CT data along with a sample data volume.
- twitter:description: This dataset contains a 512 x 512 x 512 pixel volume taken from an imaging dataset of human placental tissue collected at Diamond Light Source Manchester Imaging Branchline, I13-2 on visits MG23941 and MG22562 using in-line high-resolution synchrotron-sourced phase-contrast micro-computed X-ray tomography. This data is saved in HDF5 format with a uint8 datatype. Alongside this are two 2D binary U-net models that have been trained to segment this data. One model segments the data into regions of maternal/fetal blood volume, the other segments the blood vessels. Both models were trained using the fastai Python package, which utilises the PyTorch library. These models were used to segment the data in our paper "A massively multi-scale approach to characterising tissue architecture by synchrotron micro-CT applied to the human placenta" which can be found at https://www.biorxiv.org/content/10.1101/2020.12.07.411462v1. The code used for training the U-net models and for predicting the segmentation of the data volume can be found at https://github.com/DiamondLightSource/placental-segmentation-2dunet and is published at https://doi.org/10.5281/zenodo.4252562
Link Tags
13 tags:
- alternate: https://zenodo.org/records/4249627/files/specimen1_placental_blood_vessels_2dUnet.pkl
- alternate: https://zenodo.org/records/4249627/files/specimen1_512cube_zyx_800-1312_1000-1512_700-1212_DATA.h5
- alternate: https://zenodo.org/records/4249627/files/specimen1_placental_blood_volumes_2dUnet.pkl
- apple-touch-icon: /static/apple-touch-icon-120.png
- apple-touch-icon: /static/apple-touch-icon-152.png
Links
60 links:
- https://about.zenodo.org
- https://about.zenodo.org/contact
- https://about.zenodo.org/cookie-policy
- https://about.zenodo.org/infrastructure
- https://about.zenodo.org/policies