This section of the user guide contains a growing collection of illustrative, worked-out use cases where dcmqi was applied.
The following segmentations were produced by FreeSurfer by segmenting this DICOM MR series:
- overall brain mask (label 1 in this NIfTI file)
- left and right cerebral hemispheres (labels 2 and 3 in this NIfTI file)
- grey and white matter (labels 4 and 5 in this NIfTI file)
We will go over the process of converting this representation into a standardized DICOM Segmentation image object.
As a first step, it is always a good idea to visualize the input data. We recommend 3D Slicer for visualizing both DICOM and research format data.
Import the DICOM dataset into the Slicer DICOM database by unzipping the downloaded file with the MR series, opening the DICOM Browser, and importing the directory with the DICOM MR series files (choose "Add link" when prompted).
Next, load the segmentations stored in NIfTI. To do that, open "Add data", select the NIfTI segmentation files, check the "Options" box, and check "LabelMap" for each of the files in the list. This tells Slicer that the files you are loading contain segmentations.
You should be able to see the image with the segmentation for one of the labels shown in the overlay. If you mouse over the little pin icon in the corner of the slice viewer (orange arrow in the screenshot below) and then expand the popup panel (red arrow), you can select each of the label files in the label selector (green arrow).
To figure out the value for the individual labels in the overlay, you can mouse over the label, and then look in the Data Probe panel in the bottom left corner of the Slicer window (red arrow in the screenshot below).
Using the hints discussed above, we can figure out that:
1 = overall brain mask label in brainmask-dcmspace.nii.gz
2 = left hemisphere label in lhrh-dcmspace.nii.gz
3 = right hemisphere label in lhrh-dcmspace.nii.gz
4 = gray matter label in gmwm-dcmspace.nii.gz
5 = white matter label in gmwm-dcmspace.nii.gz
We will need the information above in the process of data conversion!
In order to do the conversion, we need to pass the converter extra metadata describing the segmentations.
We can use the web application that accompanies dcmqi
to populate the metadata JSON file: http://qiicr.org/dcmqi/#/seg.
We will need to create a new segment item in the web application for each of the labels we identified in the steps above. In our case, the structures we have are anatomical structures, and so this is what should be selected from the (short) list under "Segmented Property Category". "Segmented Property Type" contains the list of all items that belong to a category. Its drop-down contains a lot of items - start typing the name of the structure you are looking for, and it will suggest the matching names for you! The result should look like this:
Next, let's add new segments for the left and right hemispheres, and for gray and white matter, in this specific order. Add a new segment by pushing the "+" button in the web application interface:
Note that you will not find codes for the cerebral hemispheres in the drop-down - select "Brain ventricle" as a placeholder, and set the modifiers to Left and Right for segments 2 and 3. We will put in the proper codes at a later stage.
After populating all segments, you should see the following metadata for each of the individual segments in the webapp:
Next, click the OK button in the webapp to generate the JSON metafile, and download it to your computer.
It should look something like this.
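In outline, the downloaded file follows dcmqi's segmentation metadata schema. A trimmed sketch with illustrative values (only the first segment is shown, and the creator name, series description/number, colors, and codes below are placeholders - your file will differ):

```json
{
  "ContentCreatorName": "Reader1",
  "SeriesDescription": "FreeSurfer segmentation",
  "SeriesNumber": "300",
  "InstanceNumber": "1",
  "segmentAttributes": [[
    {
      "labelID": 1,
      "SegmentDescription": "Brain mask",
      "SegmentAlgorithmType": "AUTOMATIC",
      "SegmentAlgorithmName": "FreeSurfer",
      "recommendedDisplayRGBValue": [128, 174, 158],
      "SegmentedPropertyCategoryCodeSequence": {
        "CodeValue": "T-D000A",
        "CodingSchemeDesignator": "SRT",
        "CodeMeaning": "Anatomical Structure"
      },
      "SegmentedPropertyTypeCodeSequence": {
        "CodeValue": "T-A0100",
        "CodingSchemeDesignator": "SRT",
        "CodeMeaning": "Brain"
      }
    }
  ]]
}
```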
Some extra work is needed to finalize the JSON file you generated.
Open the JSON file in your favorite text editor (we recommend Atom, a hackable text editor developed by GitHub). If you use Atom, we also recommend installing the pretty-json extension, as it will help you validate your JSON file and make it more readable by adjusting indentation.
We will need to complete two tasks to finalize the JSON file.
First, we need to replace "Brain ventricle" and assign the proper code to the cerebral hemispheres. To find the code, open DICOM part 16 in your browser (be patient, this is a very large page!). Once loading is completed, search for the word "hemisphere" on that page. One of the hits leads us to table CID 12022, which contains the code we need!
Replace all occurrences of "Brain ventricle" in your JSON file with "Cerebral hemisphere", and all codes "T-A1600" with "T-A010F". The result should look like this.
Second, we need to reshuffle the content of the file. The "segmentAttributes" attribute of the JSON file is a list of lists, where each inner list corresponds to one input file passed to the converter and contains the metainformation for each of that file's segments. In our case, we have 3 input files, but the JSON file produced by the webapp has only one inner list with all of the segments. Edit the JSON file so that the first list contains just the brain mask, the second list contains the left and right cerebral hemispheres, and the third list contains gray and white matter. The resulting file should look like this.
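If you prefer scripting the reshuffle, here is a minimal Python sketch (it assumes the webapp output uses the label IDs 1-5 identified above; the stand-in dictionary only shows labelID, while the real file carries the full segment metadata):

```python
import json

def regroup(meta):
    """Split the single inner list produced by the webapp into one inner
    list of segmentAttributes per converter input file (order matters!)."""
    segments = meta["segmentAttributes"][0]
    # Label IDs per input file: brainmask, lhrh, gmwm (as identified above).
    groups = [(1,), (2, 3), (4, 5)]
    meta["segmentAttributes"] = [
        [s for s in segments if s["labelID"] in g] for g in groups
    ]
    return meta

# Minimal stand-in for the webapp output (only labelID shown).
meta = {"segmentAttributes": [[{"labelID": i} for i in range(1, 6)]]}
print(json.dumps(regroup(meta)["segmentAttributes"]))
```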
Now we should be ready to run the converter!
You can follow the installation instructions and use either binary package for your platform, or a Docker image.
Assuming you have the MR series in a folder called CMET-MRhead (it will be in that folder if you downloaded the dataset using the link at the top of this page), the segmentations in the files specified above, and your metadata JSON in meta.json, you should be able to run this command line to do the conversion:
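The invocation might look like the sketch below (the flag names are those of dcmqi's itkimage2segimage converter; the guard is only there so the sketch degrades gracefully when dcmqi is not on the PATH):

```shell
# Convert the three NIfTI label maps plus meta.json into a DICOM SEG object.
# The order of --inputImageList entries must match the order of the inner
# lists in meta.json (brain mask, hemispheres, gray/white matter).
if command -v itkimage2segimage >/dev/null 2>&1; then
  itkimage2segimage \
    --inputImageList brainmask-dcmspace.nii.gz,lhrh-dcmspace.nii.gz,gmwm-dcmspace.nii.gz \
    --inputDICOMDirectory CMET-MRhead \
    --inputMetadata meta.json \
    --outputDICOM freesurfer_seg.dcm
else
  echo "itkimage2segimage not found; install dcmqi or use its Docker image"
fi
```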
Once the converter completes without errors, the resulting DICOM SEG object should be in freesurfer_seg.dcm!
As a prerequisite, make sure you have the QuantitativeReporting extension of 3D Slicer installed! Without this extension, 3D Slicer will not know how to interpret DICOM SEG data!
Once QuantitativeReporting
is installed, import the directory with the freesurfer_seg.dcm
file. You should then see a new series with modality SEG
in the DICOM Browser:
Select this series and load it. Slicer will prompt you whether you want to load the MR series which is referenced from the segmentation object. 3D rendering of the segmentation surfaces will be shown automatically in 3D and as overlay in the slice viewers. Switch to Segmentations module of 3D Slicer to control visibility and opacity of individual segments, access information about the semantics of the segments, and perform other operations.
This is it!
See more detailed instructions on how to search for the terms outside DICOM in this section of the manual.
We identified a potential solution to this problem, and will add a feature to dcmqi
to store segmentations in a compressed form: https://github.com/QIICR/dcmqi/issues/244
In this use case we will summarize the approach to encode segmentations of various structures and measurements derived using those segmentations from multi-parametric Magnetic Resonance Imaging (MRI) of the prostate.
More specifically, we will discuss encoding of the imaging-derived data discussed in the following paper:
Fedorov A, Vangel MG, Tempany CM, Fennessy FM. Multiparametric Magnetic Resonance Imaging of the Prostate: Repeatability of Volume and Apparent Diffusion Coefficient Quantification. Investigative Radiology. 2017;52(9):538–546.
The imaging-derived data discussed in that paper consists of two components:
Segmentations of the following structures:
Whole gland of the prostate
Peripheral zone of the prostate
Suspected tumor area of the prostate within the peripheral zone of the prostate
Normal-appearing area of the peripheral zone
Segmentation-based measurements: the manuscript is concerned with evaluating the repeatability of the
mean volume of the regions listed above, and
mean Apparent Diffusion Coefficient (ADC) values calculated over the segmentation-defined regions.
Each segmented structure is saved in an individual ITK NRRD-format file.
It is advisable to organize the original DICOM image files in a directory structure like <Patient>/<Study>/<Series>/orig-img-dicom/, since one required input of the converter is the set of all DICOM files representing the original image on which the segmentation was created. Organizing the data this way allows us to simply provide the correct series directory instead of listing all files of that series.
All segmentations corresponding to a series should then be put into a folder like <Patient>/<Study>/<Series>/segmentations/. Into this folder we will also put the meta-information JSON file required to convert the segmentations.
The most confusing part is typically how to find the codes to encode specific items. In our case, some structures have codes already included in the DICOM standard, while some other codes had to be looked up in SNOMED.
Make sure to select the DICOM master list in "Segmentation Category Type Context Name". This will give you access to all of the codes in the standard.
Whole prostate gland: we encode anatomic structure (short selection list in the Category selector), and can quickly locate the code for the prostate:
Suspected tumor tissue: the most suitable code is "Lesion", but it is not available in the web interface we used before ... we will need to fix this. Note that for "Morphologically altered structure" we can (and should!) also specify "AnatomicRegionSequence" to encode the location of the lesion. In our case, all of the lesions are in the peripheral zone of the prostate.
Normal tissue: here we can use generic "Tissue" for category, and a special code (which again is not available in our web interface) for Normal tissue:
Other attributes are rather trivial to populate:
BodyPartExamined
should be PROSTATE
(all caps is important! ... we should add explanation to this ...)
SegmentDescription
can be populated with an abbreviated name reflecting the structure being segmented (note: this attribute can be at most 64 characters long!)
SegmentAlgorithmName
not applicable, since the segmentation was done manually
SegmentAlgorithmType
should be MANUAL, since that is how the segmentations were created
An almost complete meta-information JSON file for these cases could look like this:
Note that this file contains two placeholders which still need to be replaced with the correct value for the segmentations we are trying to convert: @TimePoint@
and @SeriesNumber@
. While all other properties in the JSON file are valid for all segmentation files, these two properties will differ for different segmentation files:
In this particular case we have segmentations at two timepoints, so we have to make sure the meta-information JSON file we use when running the converter has the correct timepoint encoded.
Segmentations based on one series in a study should also have a unique series number within that study. We can follow the following formula (or similar) to assign series number: <SeriesNumber of the image series>+1000
.
This means we will need several slightly different JSON files to perform the conversion. The best approach is to create them dynamically with a script that inserts the correct values for the placeholders. If we follow the data organization approach suggested above we should have one JSON file per <Patient>/<Study>/<Series>/segmentations/
folder.
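Such a script can be sketched in a few lines of Python (the template string below is a trimmed stand-in for the full meta-information file; ClinicalTrialTimePointID is assumed here to be the attribute carrying the timepoint):

```python
def fill_template(template_text, time_point, series_number):
    """Replace the @TimePoint@ / @SeriesNumber@ placeholders with the
    per-series values before running the converter."""
    return (template_text
            .replace("@TimePoint@", str(time_point))
            .replace("@SeriesNumber@", str(series_number)))

# Trimmed stand-in for the full template; baseline timepoint (1) and
# image series number 7 -> segmentation series number 1007.
template = '{"ClinicalTrialTimePointID": "@TimePoint@", "SeriesNumber": "@SeriesNumber@"}'
print(fill_template(template, 1, 7 + 1000))
```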
The converter needs to run separately for each segmentation folder. Switch to a <Patient>/<Study>/<Series>/segmentations/
folder. Then run the converter like this:
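For example (a sketch: the flag names are those of dcmqi's itkimage2segimage, the four .nrrd file names are placeholders whose order must match the segmentAttributes lists in meta.json, and the guard just keeps the sketch harmless when dcmqi is not installed):

```shell
# Run from inside <Patient>/<Study>/<Series>/segmentations/; the original
# DICOM series is assumed to sit in the sibling orig-img-dicom/ folder.
if command -v itkimage2segimage >/dev/null 2>&1; then
  itkimage2segimage \
    --inputImageList wg.nrrd,pz.nrrd,tumor.nrrd,normal.nrrd \
    --inputDICOMDirectory ../orig-img-dicom \
    --inputMetadata meta.json \
    --outputDICOM prostate.SEG.dcm
else
  echo "itkimage2segimage not found; install dcmqi first"
fi
```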
This will place a file <out-name>.SEG.dcm
into the segmentations folder. The SEG object will contain all four segmentations.
Note that the order of the files for --inputImageList has to exactly match the order of the segmentAttributes list in the meta.json. If the order of files in --inputImageList is different, or the list contains fewer or more files, we need to adjust the meta.json accordingly.
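A quick consistency check along these lines can catch count mismatches before running the converter (a sketch; check_input_list and the file names are hypothetical, and the ordering itself still has to be verified by eye):

```python
def check_input_list(meta, input_image_list):
    """Each file passed via --inputImageList corresponds to one inner list
    of segmentAttributes; the counts (and order) must agree."""
    n_groups = len(meta["segmentAttributes"])
    n_files = len(input_image_list)
    assert n_groups == n_files, (
        f"meta.json has {n_groups} segment groups, "
        f"but {n_files} input files were given")

# Hypothetical file names; a matching meta.json has two inner lists.
meta = {"segmentAttributes": [[{"labelID": 1}], [{"labelID": 1}]]}
check_input_list(meta, ["wg.nrrd", "pz.nrrd"])  # counts match, no error
```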
DICOM image series that was used for segmentation
DICOM Segmentation image series containing the segmentation result
JSON file containing the individual measurements, and additional metadata needed by DICOM
Assuming the segmentation conversion above was successful, a JSON file is required that specifies metadata for the structured report overall, and for the individual measurements.
Before discussing how to initialize individual items, we need to decide how to organize the measurements. There are at least two options here:
1. Save measurements for each combination of structure/image series as a separate DICOM SR document.
2. Save measurements for all structures segmented in a given series in a single DICOM SR document.
Considering we made a decision to save all segments (whole gland, peripheral zone, tumor region, etc) in a single DICOM SEG file, it is logical to follow the same pattern and store per-segment measurements in separate groups within the same DICOM SR.
Here are the items we will need to update:
Although SeriesNumber is not formalized by DICOM, users usually expect it to be unique. We can follow the following formula (or similar) to assign the series number: <SeriesNumber of the image series being segmented>+2000.
This item contains the name of a DICOM file that should be used to populate composite context (information about study, patient, equipment, which is the same for all series in a study) of the output DICOM object. We can set this item to be the Segmentation DICOM object, or any of the DICOM instances from the input image.
imageLibrary
should contain the list of all DICOM instances from the source image series being segmented. A component of the output SR document will include certain attributes of these instances, and will reference them by SOPInstanceUID
.
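For example, these two items might look as follows in the JSON metafile (the attribute names compositeContext and imageLibrary are from dcmqi's metadata schema; the file names are placeholders):

```json
"compositeContext": ["prostate.SEG.dcm"],
"imageLibrary": ["IM-0001.dcm", "IM-0002.dcm", "IM-0003.dcm"]
```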
In our case, there was a single reader, so we can leave this item unchanged.
These items can also remain unchanged, since we share the final measurements. The exception is timePoint, which should have the value 1 for the baseline and 2 for the followup.
Next, we need to populate the list of Measurements
. Each of the items (measurement groups) in this list will contain a list of attributes that apply to all individual measurements within the group, and a list of individual measurements.
In our case, each measurement group will contain measurements calculated over a single segmented structure.
First, let's look at the top-level attributes:
TrackingIdentifier
is a human-readable description of the measurements group. We can use the pattern <structure name> measurements
, e.g., Whole gland measurements, etc.
These items should be populated using information about the source image series and the segmentation generated in the previous conversion step. Availability of these items allows linking the measurements to the segmentation results and the source image series.
ReferencedSegment
: The SegmentNumber for the corresponding segment in the DICOM segmentation object.
sourceSeriesInstanceUID
: The SeriesInstanceUID of the actual image from which the segmentation was created.
segmentationSOPInstanceUID
: The SOPInstanceUID of the segmentation object.
The next items -- Finding and FindingSite -- are code tuples that allow us to encode the region over which the measurement was done and where it was located. These items are somewhat similar to those we had to specify when encoding the segmentation.
Here are the codes we can use for each of the structures over which we performed the measurements (each tuple in parentheses contains (CodeMeaning, CodingSchemeDesignator, CodeValue)):
Whole gland:
Finding: ("Entire Gland", "SRT", "T-F6078")
Finding site: ("Prostate", "SRT", "T-9200B")
Peripheral zone:
Finding: ("Entire", "SRT", "R-404A4")
Finding site: ("Peripheral zone of the prostate", "SRT", "T-D05E4")
Suspected tumor tissue:
Finding: ("Abnormal", "SRT", "R-42037")
Finding site: ("Peripheral zone of the prostate", "SRT", "T-D05E4")
Normal tissue:
Finding: ("Normal", "SRT", "G-A460")
Finding site: ("Peripheral zone of the prostate", "SRT", "T-D05E4")
Following the description of the top-level attributes for a measurement group is the list of individual measurements. Each measurement item must include the following attributes:
value
: the measurement value
quantity
, units
, and derivationModifier
: coded tuples describing the quantity. In our case, the measurements are either the volume of the segmented regions or the mean value of the Apparent Diffusion Coefficient (ADC).
Volume:
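A sketch of what the volume measurement item might look like (the SRT "Volume" quantity code and UCUM units below are commonly used choices, not taken from this guide, so verify them against DICOM part 16; the value is illustrative):

```json
{
  "value": "25.4",
  "quantity": {
    "CodeValue": "G-D705",
    "CodingSchemeDesignator": "SRT",
    "CodeMeaning": "Volume"
  },
  "units": {
    "CodeValue": "cm3",
    "CodingSchemeDesignator": "UCUM",
    "CodeMeaning": "cubic centimeter"
  }
}
```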
The converter needs to be run separately for each measurements JSON file. Switch to the folder where you placed the measurements JSON file (preferably a folder separate from the other data). Then execute the following command:
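The invocation might look as follows (a sketch: the flag names are those of dcmqi's tid1500writer, the directory and file names are placeholders, and the guard keeps the sketch harmless when dcmqi is not installed):

```shell
# Write the TID 1500 measurement report, referencing the image library
# (original DICOM series) and the composite context (the SEG object).
if command -v tid1500writer >/dev/null 2>&1; then
  tid1500writer \
    --inputMetadata measurements.json \
    --inputImageLibraryDirectory ../orig-img-dicom \
    --inputCompositeContextDirectory ../segmentations \
    --outputDICOM prostate.SR.dcm
else
  echo "tid1500writer not found; install dcmqi first"
fi
```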
This will create a file <out-name>.SR.dcm containing the structured report with all measurements, referencing the original images and the segmentations from which the measurements originated.
The first convenient place to start when generating JSON files for the dcmqi SEG converter is always the web application at http://qiicr.org/dcmqi/#/seg mentioned above.
Peripheral zone: this one is more difficult - no code could be located in the standard. We needed to consult SNOMED to find "Peripheral zone of the prostate", which has SCTID 279706003. Next, we looked up the SRT code for this SCTID in an older version of SNOMED (this was done by our good friend David Clunie!). The result is the code ("Peripheral zone of the prostate", "SRT", "T-D05E4") used above.
Colors: these are up to the creator, but it is usually a good idea to pick colors that make it easy to differentiate the regions. We use the color assignment summarized in the table accompanying this use case.
Overall, the conversion of segmentation-based measurements into a DICOM Structured Report following the TID 1500 "Measurement Report" template is supported by the dcmqi tool tid1500writer. This tool expects the following items as input:
We do not provide a web application to populate such a file, so at the moment the easiest approach is to start with an example file and update it as needed.
Mean ADC (note how the fact that we encode the mean value of ADC over the region is expressed with the derivationModifier):
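A sketch of such a measurement item (the ADC quantity code, UCUM units, and SRT "Mean" derivation code below are our best-guess choices rather than values from this guide, so verify them against DICOM part 16; the value is illustrative):

```json
{
  "value": "1300",
  "quantity": {
    "CodeValue": "113041",
    "CodingSchemeDesignator": "DCM",
    "CodeMeaning": "Apparent Diffusion Coefficient"
  },
  "units": {
    "CodeValue": "um2/s",
    "CodingSchemeDesignator": "UCUM",
    "CodeMeaning": "um2/s"
  },
  "derivationModifier": {
    "CodeValue": "R-00317",
    "CodingSchemeDesignator": "SRT",
    "CodeMeaning": "Mean"
  }
}
```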
Optionally, we can also re-encode the ADC maps, which were generated on the GE imaging post-processing equipment and stored as MR objects. Instead of MR objects, we can use the DICOM Parametric map object, since it allows us to explicitly communicate the quantity, the units, and the type of ADC fitting approach that was used. The dcmqi converter itkimage2paramap can be used directly for this conversion task.
Segment metadata as shown in the webapp (segments 1-5).