clara_ct_annotation_spleen_amp is a pre-trained model for volumetric (3D) annotation of the spleen from CT images, trained in mixed-precision mode.
This model was trained with our AIAA 3D model (dextr3D), using a training approach similar to [1], where user input (extreme point clicks) is modeled as 3D Gaussians in an additional input channel to the network. The network architecture is derived from [2] and initialized with ImageNet pre-trained weights. During training, point clicks are simulated from the ground-truth labels.
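The click-to-Gaussian encoding and the click simulation described above can be sketched as follows. This is an illustrative reconstruction, not the SDK's actual implementation; the function names and the sigma value are assumptions:

```python
import numpy as np

def simulate_extreme_points(mask):
    """Simulate the six extreme-point clicks (one min and one max per axis)
    from a ground-truth binary mask, as done during training."""
    coords = np.argwhere(mask)  # (N, 3) array of foreground voxel indices
    points = []
    for axis in range(3):
        points.append(tuple(coords[coords[:, axis].argmin()]))
        points.append(tuple(coords[coords[:, axis].argmax()]))
    return points

def extreme_point_channel(shape, points, sigma=3.0):
    """Render clicks as 3D Gaussians in an extra input channel.

    shape:  (D, H, W) volume shape
    points: iterable of (z, y, x) click coordinates
    sigma:  Gaussian standard deviation in voxels (illustrative value)
    """
    zz, yy, xx = np.meshgrid(
        np.arange(shape[0]), np.arange(shape[1]), np.arange(shape[2]),
        indexing="ij",
    )
    channel = np.zeros(shape, dtype=np.float32)
    for z, y, x in points:
        g = np.exp(-((zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2)
                   / (2.0 * sigma ** 2))
        channel = np.maximum(channel, g)  # overlapping clicks keep the peak
    return channel
```

The resulting channel peaks at 1.0 at each click location and is concatenated with the image as network input.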
This model is trained with 32 training image pairs and 9 validation images.
The training dataset is Task09_Spleen.tar from http://medicaldecathlon.com/.
The data was converted to 1 mm resolution before training:
nvmidl-dataconvert -d ${SOURCE_IMAGE_ROOT} -r 1 -s .nii.gz -e .nii.gz -o ${DESTINATION_IMAGE_ROOT}
NOTE: to match the default settings, we suggest that ${DESTINATION_IMAGE_ROOT} match DATA_ROOT as defined in environment.json in this MMAR's config folder.
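The conversion step resamples each volume to isotropic 1 mm spacing. A minimal numpy sketch of an equivalent resampling (nearest-neighbor only; the actual nvmidl-dataconvert tool is not shown here, and a real converter would typically use higher-order interpolation for images):

```python
import numpy as np

def resample_to_isotropic(volume, spacing, target=1.0):
    """Resample a 3D volume to isotropic target spacing (mm) by
    nearest-neighbor index lookup.

    volume:  3D numpy array
    spacing: per-axis voxel spacing in mm, e.g. from the NIfTI header
    """
    factors = [s / target for s in spacing]
    new_shape = [int(round(n * f)) for n, f in zip(volume.shape, factors)]
    # For each output axis, map output indices back to source indices.
    idx = [np.minimum((np.arange(m) / f).astype(int), n - 1)
           for m, f, n in zip(new_shape, factors, volume.shape)]
    return volume[np.ix_(*idx)]
```

Label volumes must use nearest-neighbor interpolation in any case, so the same routine applies to masks.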
The training was performed with the command train.sh, which requires a GPU with 16 GB of memory.
Training Graph Input Shape: 128 x 128 x 128
Input: 1 channel CT image
Output: 2 channels for background & foreground
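As a shape sanity check for these settings (the batch dimension and NCDHW channel ordering are assumptions for illustration):

```python
import numpy as np

# 1-channel CT crop at the training graph input shape (N, C, D, H, W)
x = np.zeros((1, 1, 128, 128, 128), dtype=np.float32)

# Network output: 2 channels (background, foreground)
y = np.zeros((1, 2, 128, 128, 128), dtype=np.float32)

# Per-voxel label via channel-wise argmax: 0 = background, 1 = spleen
mask = y.argmax(axis=1)
```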
This model achieves the following Dice score on the validation data (our own split from the training dataset):
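For reference, the Dice score used for validation compares the predicted and ground-truth masks; a minimal sketch (the function name and epsilon are my own, not the SDK's):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks:
    2 * |pred & truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```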
To access this model, please apply for access at:
https://developer.nvidia.com/clara
This model is usable only as part of the Transfer Learning & Annotation Tools in the Clara Train SDK container. You can download the model from the NGC registry as described in the Getting Started Guide.
This model is compatible only with Clara Train SDK v2.0; it will not work with v1.0 or v1.1.
This is an example model and is not to be used for diagnostic purposes.
End User License Agreement is included with the product. Licenses are also available along with the model application zip file. By pulling and using the Clara Train SDK container and downloading models, you accept the terms and conditions of these licenses.
[1] Maninis, Kevis-Kokitsi, et al. "Deep Extreme Cut: From extreme points to object segmentation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018. https://arxiv.org/abs/1711.09081.
[2] Liu, Siqi, et al. "3D Anisotropic Hybrid Network: Transferring convolutional features from 2D images to 3D anisotropic volumes." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018. https://arxiv.org/abs/1711.08580.