This section of the user guide contains a (growing) collection of illustrative, worked-out use cases where dcmqi was applied.
Segmentations of
grey and white matter (labels 4 and 5 in this NIfTI file)
left and right cerebral hemispheres (labels 2 and 3 in this NIfTI file)
overall brain mask (label 1 in this NIfTI file)
were produced by FreeSurfer by segmenting this DICOM MR series.
We will go over the process of converting this representation into a standardized DICOM Segmentation image object.
As a first step, it is always a good idea to visualize the input data. We recommend 3D Slicer for visualizing both DICOM and research format data.
Import the DICOM dataset into the Slicer DICOM database by unzipping the downloaded file with the MR series, opening the DICOM Browser, and importing the directory with the DICOM MR series files (choose "Add link" when prompted).
Next, load the segmentations stored in NIfTI. To do that, open "Add data", select the NIfTI segmentation files, check the "Options" flag, and check "LabelMap" for each of the files in the list. This will tell Slicer that the files you are loading contain segmentations.
You should be able to see the image with the segmentation for one of the labels shown in the overlay. If you mouse over the little pin icon in the corner of the slice viewer (orange arrow in the screenshot below), then expand the popup panel (red arrow), you can select each of the label files in the label selector (green arrow).
To figure out the value for the individual labels in the overlay, you can mouse over the label, and then look in the Data Probe panel in the bottom left corner of the Slicer window (red arrow in the screenshot below).
Using the hints discussed above, we can figure out that:
1 = overall brain mask label in brainmask-dcmspace.nii.gz
2 = left hemisphere label in lhrh-dcmspace.nii.gz
3 = right hemisphere label in lhrh-dcmspace.nii.gz
4 = gray matter label in gmwm-dcmspace.nii.gz
5 = white matter label in gmwm-dcmspace.nii.gz
We will need the information above in the process of data conversion!
In order to do the conversion, we need to pass extra metadata that would describe the segmentations to the converter.
We can use the web application that accompanies dcmqi to populate the metadata JSON file: http://qiicr.org/dcmqi/#/seg.
We will need to create a new segment item in the web application for each of the labels we identified in the steps above. In our case, the structures we have are anatomical structures, and so this is what should be selected from the (short) list under "Segmented Property Category". "Segmented Property Type" contains the list of all items that belong to a category. Its drop-down contains a lot of items - start typing the name of the structure you are looking for, and it will suggest the matching names for you! The result should look like this:
Next let's add new segments for left and right hemisphere, and gray and white matter in this specific order. Add new segment by pushing the "+" button in the web application interface:
Note that you will not find codes for the cerebral hemispheres in the drop-down - select brain ventricle as a placeholder, and select the modifiers Left and Right for segments 2 and 3. We will put the proper codes in at a later stage.
After populating all segments, you should see the following metadata for each of the individual segments in the webapp:
Next, click OK button in the webapp to generate the JSON metafile, and download it to your computer.
It should look something like this.
Some extra work is needed to finalize the JSON file you generated.
Open JSON file in your favorite text editor (we recommend Atom, a hackable text editor developed by GitHub). If you use Atom, we also recommend you install pretty-json extension, as it will help you validate your JSON file and make it more readable by adjusting indentation.
We will need to complete two tasks to finalize the JSON file.
First, we need to replace "Brain ventricle" and assign the proper code to the cerebral hemispheres. To find the code, open DICOM part 16 in your browser (be patient, this is a very large page!). Once loading is completed, search for the word "hemisphere" on that page. One of the hits leads us to table CID 12022, which contains the code we need!
Replace all occurrences of "Brain ventricle" in your JSON file with "Cerebral hemisphere", and all codes "T-A1600" with "T-A010F". The result should look like this.
Second, we need to reshuffle the content of the file. The "segmentAttributes" attribute of the JSON file is a list of lists, where each of the inner lists corresponds to one input file passed to the converter and contains metainformation for each of the segments. In our case, we have 3 input files. In the JSON file produced by the webapp, we have only one inner list with all of the segments. Edit the JSON file so that the first list contains just the brain mask, the second list contains the left and right cerebral hemispheres, and the third list contains gray and white matter. The resulting file should look like this.
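As an illustration, a trimmed sketch of that nested structure is shown below (segment attributes are abbreviated with "..." placeholders; the real file carries the full set of attributes for each segment):

```json
{
  "segmentAttributes": [
    [ {"labelID": 1, "SegmentDescription": "brain mask", "...": "..."} ],
    [ {"labelID": 2, "SegmentDescription": "left hemisphere", "...": "..."},
      {"labelID": 3, "SegmentDescription": "right hemisphere", "...": "..."} ],
    [ {"labelID": 4, "SegmentDescription": "gray matter", "...": "..."},
      {"labelID": 5, "SegmentDescription": "white matter", "...": "..."} ]
  ]
}
```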
Now we should be ready to run the converter!
You can follow the installation instructions and use either a binary package for your platform, or a Docker image.
Assuming you have the MR series in a folder called CMET-MRhead (it will be in that folder if you downloaded the dataset using the link at the top of this page), the segmentations in the files listed above, and your metadata JSON in meta.json, you should be able to run this command line to do the conversion:
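A sketch of what that invocation could look like (run itkimage2segimage --help to confirm the exact option names for your version; the order of the files in --inputImageList must match the order of the inner lists in meta.json):

```bash
itkimage2segimage \
  --inputDICOMDirectory CMET-MRhead \
  --inputImageList brainmask-dcmspace.nii.gz,lhrh-dcmspace.nii.gz,gmwm-dcmspace.nii.gz \
  --inputMetadata meta.json \
  --outputDICOM freesurfer_seg.dcm
```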
Once the converter completes without errors, the resulting DICOM SEG object should be in freesurfer_seg.dcm!
As a prerequisite, make sure you have the QuantitativeReporting extension of 3D Slicer installed! Without this extension, 3D Slicer will not know how to interpret DICOM SEG data!
Once QuantitativeReporting is installed, import the directory with the freesurfer_seg.dcm file. You should then see a new series with modality SEG in the DICOM Browser:
Select this series and load it. Slicer will prompt you whether you want to load the MR series which is referenced from the segmentation object. 3D rendering of the segmentation surfaces will be shown automatically in 3D and as overlay in the slice viewers. Switch to Segmentations module of 3D Slicer to control visibility and opacity of individual segments, access information about the semantics of the segments, and perform other operations.
This is it!
Too many to count, of course ... TBD
See more detailed instructions on how to search for the terms outside DICOM in this section of the manual.
We identified a potential solution to this problem, and will add a feature to dcmqi to store segmentations in a compressed form: https://github.com/QIICR/dcmqi/issues/244
Segment metadata as shown in the webapp (screenshots for segments 1-5 omitted).
This is the user guide for the dcmqi (DICOM for Quantitative Imaging) library.
With dcmqi you can:
Convert certain types of quantitative image analysis results into standardized DICOM form. This can help you with
sharing data in archives like TCIA
interoperating with PACS and commercial tools
supporting data queries to both image data and analysis results
standardizing data semantics
making your data self-described and better prepared for new uses
Convert DICOM data into commonly used research file formats like JSON and NIfTI.
Integrate DICOM concepts into your analysis workflows so that intermediate results are encoded in a standardized manner, making it easy to share your data.
dcmqi is distributed under the 3-clause BSD license.
Check out our introductory tutorial!
You can communicate your feedback, feature requests, comments or problem reports using any of the methods below:
post a question to the dcmqi Google group
submit an issue on the dcmqi bug tracker
leave feedback directly in the dcmqi user manual
send an email to Andrey Fedorov
To acknowledge dcmqi in an academic paper, please cite
Herz C, Fillion-Robin J-C, Onken M, Riesmeier J, Lasso A, Pinter C, Fichtinger G, Pieper S, Clunie D, Kikinis R, Fedorov A. dcmqi: An Open Source Library for Standardized Communication of Quantitative Image Analysis Results Using DICOM. Cancer Research. 2017;77(21):e87–e90 http://cancerres.aacrjournals.org/content/77/21/e87.
If you like dcmqi, please give the dcmqi repository a star on GitHub. This is an easy way to show thanks, and it can help us qualify for useful services that are only open to widely recognized open projects.
This work is supported primarily by the National Institutes of Health, National Cancer Institute, Informatics Technology for Cancer Research (ITCR) program, grant Quantitative Image Informatics for Cancer Research (QIICR) (U24 CA180918, PIs Kikinis and Fedorov). We also acknowledge support of the following grants: Neuroimage Analysis Center (NAC) (P41 EB015902, PI Kikinis) and National Center for Image Guided Therapy (NCIGT) (P41 EB015898, PI Tempany).
Fedorov A, Clunie D, Ulrich E, Bauer C, Wahle A, Brown B, Onken M, Riesmeier J, Pieper S, Kikinis R, Buatti J, Beichel RR. (2016) DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research. PeerJ 4:e2057 https://doi.org/10.7717/peerj.2057
Herz C, Fillion-Robin J-C, Onken M, Riesmeier J, Lasso A, Pinter C, Fichtinger G, Pieper S, Clunie D, Kikinis R, Fedorov A. dcmqi: An Open Source Library for Standardized Communication of Quantitative Image Analysis Results Using DICOM. Cancer Research. 2017;77(21):e87–e90 http://cancerres.aacrjournals.org/content/77/21/e87.
dcmqi is a collection of libraries and command line tools. It is currently possible to install dcmqi using one of the following approaches:
Using pre-built binaries for your platform
Using a Docker image
We are also working on providing downloadable binaries of dcmqi independent of the 3D Slicer application. This option will be supported in the future.
To start using dcmqi, you can download the binary package with the command line converters for Windows, Linux and macOS from the releases page (both the latest release and a pre-release corresponding to the current version of the source code).
If you prefer using Docker, download the Docker image with docker pull qiicr/dcmqi.
If you want to build dcmqi yourself, or modify the source code, see the build instructions later in this guide.
Many research studies such as imaging clinical trials or retrospective analysis of clinical data use a collection of databases, spreadsheets, and research data file formats such as NRRD, NIfTI, etc. As an example, a common practice to share segmentations is to provide NIfTI files along with a CSV file mapping label numbers to anatomical names.
As an alternative, you can use the itkimage2segimage command and related tools in dcmqi along with a JSON parameter file, so that the segmentation output is described in terms of standardized vocabularies such as SNOMED CT, and the segmentation can be saved in DICOM format side by side and cross-referenced with the source image data. This can help remove ambiguity about the meaning of the results.
Example command line:
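A sketch of such a command line (the file names here are hypothetical placeholders; confirm the option names with itkimage2segimage --help):

```bash
itkimage2segimage \
  --inputDICOMDirectory ./source_dicom \
  --inputImageList tumor_segmentation.nrrd \
  --inputMetadata metadata.json \
  --outputDICOM tumor_segmentation.SEG.dcm
```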
Note for Windows users
We recommend you use Windows PowerShell, which is integrated into Windows 10 and later versions. The command line format on Windows is different from that on Mac or Linux. Here is an example of the command line format on Windows (applied to using dcmqi from Docker):
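A sketch of the PowerShell form, assuming the data lives in C:\Users\username\data (a hypothetical path) and that drive has been shared with Docker:

```powershell
docker run -v C:\Users\username\data:/tmp/data qiicr/dcmqi itkimage2segimage `
  --inputDICOMDirectory /tmp/data/source_dicom `
  --inputImageList /tmp/data/tumor_segmentation.nrrd `
  --inputMetadata /tmp/data/metadata.json `
  --outputDICOM /tmp/data/tumor_segmentation.SEG.dcm
```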
If you get quantitative imaging data in DICOM format but want to use it in MATLAB or ITK, you can use segimage2itkimage to extract conventional NIfTI files of the segmentations, or tid1500reader to convert structured reports into JSON.
Example command line:
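A sketch of the reverse direction using segimage2itkimage (tid1500reader follows a similar pattern; file names are hypothetical, and the option names should be verified with --help):

```bash
segimage2itkimage \
  --inputDICOM tumor_segmentation.SEG.dcm \
  --outputDirectory ./extracted_segments
```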
The goal of dcmqi is to help you use DICOM for storing the results of quantitative image analysis.
Why would you want to use DICOM for your analysis results?
You should use DICOM if you want to improve the interoperability of your data, to enhance the ability of computational tools to automatically find and use the data, and to support reuse of your data by individuals. These goals are widely recognized as important in the scientific community.
To highlight some of the specific advantages of using DICOM for storing analysis data, below we annotate the FAIR (Findable, Accessible, Interoperable, Reusable) Guiding Principles formalized by Wilkinson et al. [1], as applied to quantitative image analysis, describing how research formats help meet the FAIR requirements, and contrasting those with the functionality provided by DICOM.
While speaking of "research formats", we refer primarily to the formats commonly used by researchers developing quantitative image analysis tools, such as NRRD and NIfTI.
A notable example of a domain-specific solution proposed for data storage is being developed for neuroimaging applications. We are not aware of a domain-specific solution developed for cancer imaging.
No.
DICOM is the international standard for biomedical images and image-related information.
Widespread use of DICOM in the clinic has led to a common misconception that DICOM is only suitable for storing clinical images. DICOM applications are not restricted to clinical use. The standard defines objects for storing image and image-related data that are perfectly suitable to support imaging research.
Most common types of DICOM objects are those produced by the clinical imaging equipment: Computed Tomography (CT), Magnetic Resonance Imaging (MRI) or ultrasound (US) image objects.
Some of the examples of image-related information that can be stored using DICOM include:
radiation therapy dose, encoding dose distribution calculated by radiotherapy planning systems;
radiation therapy structure set, containing contours of patient structures derived from images;
segmentation image, which describes a classification of pixels in one or more referenced images;
parametric map image, which contains pixels derived by the analysis of image data;
measurements derived from an area in the image.
dcmqi provides tools to convert the types of image-derived data produced in quantitative imaging research into standardized DICOM representations.
dcmqi does not support conversion to/from DICOM RT-STRUCT objects.
DICOM RT-STRUCT is a type of DICOM object that has found wide adoption in the radiation therapy community for storing the planar contours of anatomical structures prepared for radiation dose planning. Due to the large installed base of radiation therapy planning tools and the history of their usage, there are large amounts of image data annotated with DICOM RT-STRUCT contours.
Many segmentation algorithms produce a classification of image voxels, and thus do not naturally suit representation by means of planar contours; conversion to contours requires an extra operation and creates a potential for information loss. The DICOM Segmentation image encodes the classification of individual image voxels directly, and is thus a more natural fit for voxel-based segmentation results.
In our view, there is no need for yet another implementation of RT-STRUCT conversion to and from research formats.
We also note that although reporting of measurements for an image region defined by RT-STRUCT is supported by the DICOM TID 1500 Structured Report, we currently do not implement direct support for this in dcmqi. We may consider adding this functionality in the future.
Users who need to report measurements over an RT-STRUCT should first convert the RT-STRUCT into a rasterized representation stored in an ITK-readable research format using any of the tools mentioned above, and then convert the result into a DICOM Segmentation image using dcmqi. The measurements reported over the region defined by the Segmentation image can then be stored as a DICOM TID 1500 Structured Report.
What research formats does dcmqi support? The research formats we support are specific to the type of the object being converted, as summarized in the table below.
How is dcmqi related to other tools? Below is a list of various tools that are used by medical imaging researchers, and a description of how they relate to dcmqi:
Can I use dcmqi for patient care? dcmqi is intended for research work and has no FDA clearances or approvals of any kind. It is the responsibility of the user to comply with all laws and regulations (and moral/ethical guidelines) when using dcmqi.
Image-derived objects, such as segmentations or parametric maps, inherit some of the attributes from the image they were derived from. Such attributes include, for example, those from the "General Study" and "Patient" modules. When some of those attributes are mandatory but not present in the source images, the converter will fail. The reason is that the converter will not write out an invalid object, and a valid object cannot be constructed when mandatory attributes are missing.
As an example, here is an error that would be generated by the itkimage2segimage converter when a mandatory "CodingSchemeDesignator" attribute is missing in the source DICOM image:
To fix this issue, the following command can be used:
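One way to do such patching is with DCMTK's dcmodify; a sketch is shown below. The sequence used here (ProcedureCodeSequence) is only an illustration of the path syntax - substitute the sequence named in the error message for your data, and verify the syntax against the dcmodify documentation:

```bash
# insert a private ("99"-prefixed) CodingSchemeDesignator into the first item
# of the offending sequence for every file of the source series
dcmodify -i "(0008,1032)[0].(0008,0102)=99UNKNOWN" *.dcm
```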
The way 3D Slicer works at the moment is that you cannot scroll over the slices of the segmentation unless your segmentation is accompanied by the image that was segmented.
Try importing the DICOM image you segmented into the Slicer DICOM database, and load both the segmentation and image series.
Tutorials below are accompanied by the narrated video instructions, pointers to the components you will need to install and sample data to follow along.
If you are designing a new analysis workflow from scratch, you can consider using DICOM as the format to store intermediate results. The use cases in this guide show how to create analysis results in native DICOM format that use standard terminologies and units, while retaining their linkage to the original input data. You could consider using dcmqi utilities to convert to and from research formats.
[1] Wilkinson et al. 2016. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3:160018. DOI: 10.1038/sdata.2016.18.
DICOM supports a wide range of biomedical applications, including preclinical, veterinary and small animal imaging. Preclinical small animal imaging was introduced to the standard in 2015.
We do not support RT-STRUCT, since there are established dedicated tools and libraries for handling DICOM RT-STRUCT; several notable examples are listed among the related tools below.
ITK - does not provide tools for conversion of the DICOM objects supported by dcmqi. dcmqi uses ITK as a lower-level component for reading and writing research formats, and for image data operations.
DCMTK - provides attribute- and SR tree-level C++ APIs for interacting with DICOM data; provides general-purpose command line tools for converting DICOM objects into a human-readable list of attributes, and for editing individual attributes of a DICOM dataset; does not provide tools for generating DICOM objects from research formats. dcmqi uses DCMTK as the lower-level component to operate on DICOM data.
Toolkit - provides attribute- and module-level Java API for interacting with DICOM objects; does not provide conversion tools for generating DICOM Segmentation image objects, Parametric maps, or volumetric measurements reports from research formats
- provides attribute-level C API for interacting with DICOM objects, and general-purpose DICOM editing tools that can be used to change individual attributes of the dataset. Does not provide conversion tools for generating DICOM Segmentation image objects, Parametric maps, or volumetric measurements reports from research formats
- provides attribute-level Python API for interacting with DICOM objects; does not provide conversion tools for generating DICOM Segmentation image objects, Parametric maps, or volumetric measurements reports from research formats
3D Slicer - provides an interactive application to load and process DICOM data; includes dcmqi as an extension; uses dcmqi to perform conversion of the objects dcmqi supports.
- provides an interactive application to visualize and annotate DICOM data; by design, does not provide conversion tools; uses dcmqi to perform conversion of the DICOM TID 1500 SR objects it supports; uses an attribute-level DICOM API to implement support of DICOM Segmentation objects and DICOM Parametric maps.
- a software platform for developing and sharing research results in radiation therapy treatment planning. Supports DICOM RT-STRUCT annotation format. Does not support DICOM Segmentation image, Parametric map, or volumetric measurements.
Make sure you install the .
To resolve this situation, you can patch the source image to add the missing attributes. To do such patching we recommend the DCMTK dcmodify tool.
Examining the content of the source image, indeed this attribute is missing:
Note the "99" prefix used for the coding scheme designator. Unless you know which coding scheme the specific code belongs to, you should always use this prefix to indicate that the coding scheme is "private" (not one of the coding schemes recognized by the DICOM standard).
If you agree this behavior is confusing, add your voice to the related issue on the bug tracker.
We use JSON to represent metadata that is either passed to the converter into DICOM, or that is extracted from DICOM representation.
As such, you will need to have some understanding of both DICOM and JSON if you want to use dcmqi.
We provide the following tools that can help you use dcmqi more effectively.
dicom-dump is an extension for Atom text editor that can be used to explore the content of DICOM data
dcmqi validators web app can be used to see examples of metadata JSON for specific object types, to explore the schema, and to validate your own metadata JSON against the schema
You can check out our Learning resources section for some reference material about JSON and DICOM that we found helpful.
FAIR Guiding principle | Research formats | DICOM |
To be Findable: |
F1. (meta)data are assigned a globally unique and persistent identifier | usually not assigned | each object has a unique identifier |
F2. data are described with rich metadata (defined by R1 below) | minimal metadata sufficient to solve specific task (e.g., image resolution and orientation) | metadata is stored in standardized attributes describing versatile aspects of the data (the subject being imaged, processing details, references to related objects, etc.) |
F3. metadata clearly and explicitly include the identifier of the data it describes | metadata describing the subject may be stored separately from the analysis result, and cross-linked by means of file name or similar mechanism, creating opportunity for inconsistencies and errors | metadata is stored in the same object as the processing result |
F4. (meta)data are registered or indexed in a searchable resource | problem-specific solutions | general-purpose search and indexing of DICOM data is supported by every Picture Archival and Communications System (PACS) using the DICOM Query and Retrieve protocol, or using the REST-based protocol (DICOMweb) |
To be Accessible: |
A1. (meta)data are retrievable, by their identifier using a standardized communication protocol | problem-specific solutions | general-purpose retrieval of DICOM data is supported by every Picture Archival and Communications System (PACS) using the DICOM Query and Retrieve protocol, or using the REST-based protocol (DICOMweb) |
A1.1. the protocol is open, free, and universally implementable | no comparable protocols have been proposed and implemented in widely accessible solutions | yes |
A1.2. the protocol allows for an authentication and authorization procedure, where necessary | no comparable protocols have been proposed and implemented in widely accessible solutions | DICOMWeb can be integrated with existing authentication protocols defined by other standards |
A2. metadata is accessible, even when the data is no longer available | no | not applicable, since metadata is stored alongside the data in the same object |
To be Interoperable: |
I1. (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation | domain-specific solutions | DICOM is a formal, accessible, shared and broadly used standard |
I2. (meta)data use vocabularies that follow FAIR principles | domain-specific solutions | DICOM can reuse vocabularies defined elsewhere, relying often on established terminologies (such as SNOMED CT and UCUM), and allows for integration with other vocabularies, including those defined by the user |
I3. (meta)data includes qualified references to other (meta)data | usually, no | derived objects can include pointers to the datasets used in the derivation, including the purpose of reference |
To be Reusable: |
R1. (meta)data are richly described with a plurality of accurate and relevant attributes | usually, no | data attributes that need to be included for a specific object are defined by the standard, as a result of the community discussion and consensus process; the process of amending the standard is formalized and open |
R2. (meta)data are released with a clear and accessible data usage license | not applicable; data usage license is selected by the data provider | not applicable; data usage license is selected by the data provider; the DICOM standard itself is available free of charge, and its implementation is not restricted by any licenses |
R1.2. (meta)data are associated with detailed provenance | usually, no | composite context is preserved across imaging and derived datasets describing patient and acquisition details; provenance-related attributes are included, depending on the specific object type |
R1.3. (meta)data meet domain-relevant community standards | domain-specific solutions | DICOM is the main standard in the medical imaging domain |
Object type | Supported research formats | DICOM object |
Segmentation image | all research volumetric image formats supported by ITK (e.g., NRRD, NIfTI); extra metadata is communicated using JSON and constrained by the dcmqi JSON schema | DICOM Segmentation image |
Parametric map image | all research volumetric image formats supported by ITK (e.g., NRRD, NIfTI); extra metadata is communicated using JSON and constrained by the dcmqi JSON schema | DICOM Parametric map |
Volumetric measurements | measurements and associated metadata are described using JSON and constrained by the dcmqi JSON schema; support for an additional input format is planned | DICOM Structured Report (TID 1500) |
You can access the functionality provided by dcmqi in 3D Slicer using the QuantitativeReporting extension. Specifically, you can use QuantitativeReporting to prepare segmentations of DICOM imaging series, define the semantics of the segmentation, calculate measurements, and export the result as DICOM. You can also load DICOM segmentations, measurement structured reports and parametric maps with QuantitativeReporting, which uses dcmqi to support data conversion. See the QuantitativeReporting user guide for further details.
dcmqi itself is also available as a 3D Slicer extension. It provides the libraries and converters that can be integrated with other 3D Slicer extensions. If you are developing an extension for 3D Slicer, you can access the dcmqi converters by specifying a dependency on the dcmqi 3D Slicer extension.
QuantitativeReporting should be available as an extension in the nightly version of 3D Slicer, starting November 2016. We always suggest that you use the latest (nightly) version of the software when you try it for the first time.
Download the latest nightly release for your platform from http://download.slicer.org.
Do NOT use installers tagged as "Stable Release"!
After downloading, follow the installation procedure for your platform.
If you use Mac, make sure you move the Slicer application to the Applications folder on your computer before launching it!
Once installed, open 3D Slicer Extension Manager by clicking the icon as shown below.
Search for QuantitativeReporting and install the extension by clicking the INSTALL button.
It is currently not possible to upgrade an extension without upgrading the 3D Slicer application.
If you need to upgrade the software, download the latest nightly release of 3D Slicer and install QuantitativeReporting as explained above.
In this use case we will summarize the approach to encode segmentations of various structures and measurements derived using those segmentations from multi-parametric Magnetic Resonance Imaging (MRI) of the prostate.
More specifically, we will discuss encoding of the imaging-derived data discussed in the following paper:
Fedorov A, Vangel MG, Tempany CM, Fennessy FM. Multiparametric Magnetic Resonance Imaging of the Prostate: Repeatability of Volume and Apparent Diffusion Coefficient Quantification. Investigative Radiology. 2017;52(9):538–546.
The imaging-derived data discussed in that paper consists of the two components:
Segmentations of the following structures:
Whole gland of the prostate
Peripheral zone of the prostate
Suspected tumor area of the prostate within the peripheral zone of the prostate
Normal-appearing area of the peripheral zone
Segmentation-based measurements: the manuscript is concerned with evaluating the repeatability of the
mean volume of the regions listed above, and
mean Apparent Diffusion Coefficient (ADC) values calculated over the segmentation-defined regions.
Each segmented structure is saved in an individual ITK-readable NRRD file.
It is advisable to organize the original DICOM image files in a directory structure like <Patient>/<Study>/<Series>/orig-img-dicom/, since one required input of the converter is the set of DICOM files representing the original image on which the segmentation was created. Organizing the data as described above allows us to simply provide the correct series directory instead of listing all files of that series.
All segmentations corresponding to a series should then be put into a folder like <Patient>/<Study>/<Series>/segmentations/
. Into this folder we will also put the meta-information JSON file required to convert the segmentations.
The most confusing part is typically how to find the codes to encode specific items. In our case, some structures have codes already included in the DICOM standard, while some other codes had to be looked up in SNOMED.
Make sure to select DICOM master list in the "Segmentation Category Type Context Name". This will give you access to all of the codes in the standard.
Whole prostate gland: we encode anatomic structure (short selection list in the Category selector), and can quickly locate the code for the prostate:
Suspected tumor tissue: the most suitable code is "Lesion", but it is not available in the web interface we used before ... We will need to fix this... Note that for "Morphologically altered structure" we can (and should!) also specify "AnatomicRegionSequence" to encode the location of the lesion. In our case, all of the lesions are in the peripheral zone of the prostate.
Normal tissue: here we can use generic "Tissue" for category, and a special code (which again is not available in our web interface) for Normal tissue:
Other attributes are rather trivial to populate:
BodyPartExamined should be PROSTATE (all caps is important! ... we should add an explanation for this ...)
SegmentDescription can be populated with an abbreviated name reflecting the structure being segmented (note: this attribute can be at most 64 characters long!)
SegmentAlgorithmName is not applicable, since segmentation was done manually
SegmentAlgorithmType should be set to MANUAL, since that is how the segmentations were created
An almost complete meta-information JSON file for these cases could look like this:
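A heavily abbreviated sketch of such a file is shown below. Only one of the four segments is included, and only a subset of attributes; exactly which attributes carry the @TimePoint@ and @SeriesNumber@ placeholders is an assumption here, and the codes should be double-checked with the web application:

```json
{
  "ContentCreatorName": "Reader1",
  "ClinicalTrialTimePointID": "@TimePoint@",
  "SeriesNumber": "@SeriesNumber@",
  "SeriesDescription": "Segmentation",
  "InstanceNumber": "1",
  "BodyPartExamined": "PROSTATE",
  "segmentAttributes": [
    [
      {
        "labelID": 1,
        "SegmentDescription": "WholeGland",
        "SegmentAlgorithmType": "MANUAL",
        "SegmentedPropertyCategoryCodeSequence": {
          "CodeValue": "T-D000A",
          "CodingSchemeDesignator": "SRT",
          "CodeMeaning": "Anatomical Structure"
        },
        "SegmentedPropertyTypeCodeSequence": {
          "CodeValue": "T-9200B",
          "CodingSchemeDesignator": "SRT",
          "CodeMeaning": "Prostate"
        }
      }
    ]
  ]
}
```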
Note that this file contains two placeholders which still need to be replaced with the correct values for the segmentations we are trying to convert: @TimePoint@ and @SeriesNumber@. While all other properties in the JSON file are valid for all segmentation files, these two properties will differ between segmentation files:
In this particular case we have segmentations at two timepoints, so we have to make sure the meta-information JSON file we use when running the converter has the correct timepoint encoded.
Segmentations based on one series in a study should also have a unique series number within that study. We can follow this formula (or similar) to assign the series number: <SeriesNumber of the image series>+1000.
This means we will need several slightly different JSON files to perform the conversion. The best approach is to create them dynamically with a script that inserts the correct values for the placeholders. If we follow the data organization approach suggested above, we should have one JSON file per <Patient>/<Study>/<Series>/segmentations/ folder.
The converter needs to run separately for each segmentation folder. Switch to a <Patient>/<Study>/<Series>/segmentations/ folder, then run the converter like this:
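A sketch of the invocation, assuming four segmentation files named wg.nrrd, pz.nrrd, tumor.nrrd and normal.nrrd (hypothetical names) and the image series directory next to the segmentations folder:

```bash
itkimage2segimage \
  --inputDICOMDirectory ../orig-img-dicom \
  --inputImageList wg.nrrd,pz.nrrd,tumor.nrrd,normal.nrrd \
  --inputMetadata meta.json \
  --outputDICOM <out-name>.SEG.dcm
```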
This will place a file <out-name>.SEG.dcm into the segmentations folder. The SEG object will contain all four segmentations.
Note that the order of the files for --inputImageList has to exactly match the order of the segmentAttributes list in the meta.json. If the order of files in --inputImageList is different, or it contains fewer or more files, we need to adjust the meta.json accordingly.
Converting the measurements into a DICOM Structured Report requires the following inputs:
the DICOM image series that was used for segmentation
the DICOM Segmentation image series containing the segmentation result
a JSON file containing the individual measurements, and additional metadata needed by DICOM
Assuming the segmentation conversion above was successful, a JSON file is required that specifies metadata for the structured report overall, and for the individual measurements.
Before discussing how to initialize individual items, we need to decide how to organize the measurements. There are at least two options here:
1. Save measurements for each combination of structure/image series as a separate DICOM SR document.
2. Save measurements for all structures segmented in a given series in a single DICOM SR document.
Considering we made a decision to save all segments (whole gland, peripheral zone, tumor region, etc) in a single DICOM SEG file, it is logical to follow the same pattern and store per-segment measurements in separate groups within the same DICOM SR.
Here are the items we will need to update:
Although DICOM does not formalize any requirements on SeriesNumber, users usually expect it to be unique. We can follow this formula (or similar) to assign the series number: <SeriesNumber of the image series being segmented>+2000.
This item contains the name of a DICOM file that should be used to populate composite context (information about study, patient, equipment, which is the same for all series in a study) of the output DICOM object. We can set this item to be the Segmentation DICOM object, or any of the DICOM instances from the input image.
imageLibrary should contain the list of all DICOM instances from the source image series being segmented. A component of the output SR document will include certain attributes of these instances, and will reference them by SOPInstanceUID.
In our case, there was a single reader, so we can leave this item unchanged.
These items can also remain unchanged, since we share final measurements. The exception is timePoint, which should have the value 1 for the baseline and 2 for the followup.
Next, we need to populate the list of Measurements. Each of the items (measurement groups) in this list will contain a list of attributes that apply to all individual measurements within the group, and a list of individual measurements.
In our case, each measurement group will contain measurements calculated over a single segmented structure.
First, let's look at the top-level attributes:
TrackingIdentifier is a human-readable description of the measurement group. We can use the pattern <structure name> measurements, e.g., Whole gland measurements.
These items should be propagated from the information about the source image series and the segmentation generated in the previous conversion step. The availability of these items makes it possible to link measurements with the results of segmentation and the source image series.
ReferencedSegment: the SegmentNumber of the corresponding segment in the DICOM segmentation object.
sourceSeriesInstanceUID: the SeriesInstanceUID of the actual image from which the segmentation was created.
segmentationSOPInstanceUID: the SOPInstanceUID of the segmentation object.
The next items -- Finding and FindingSite -- are code tuples that allow us to encode the region over which the measurement was done, and where it was located. These items are somewhat similar to what we had to specify when encoding the segmentation.
Here are the codes we can use for each of the structures over which we performed the measurements (each tuple in parentheses contains (CodeMeaning, CodingSchemeDesignator, CodeValue)):
Whole gland:
Finding: ("Entire Gland", "SRT", T-F6078)
Finding site: ("Prostate", "SRT", "T-9200B")
Peripheral zone:
Finding: ("Entire", "SRT", "R-404A4")
Finding site: ("Peripheral zone of the prostate", "SRT", "T-D05E4")
Suspected tumor tissue:
Finding: ("Abnormal", "SRT", "R-42037")
Finding site: ("Peripheral zone of the prostate", "SRT", "T-D05E4")
Normal tissue:
Finding: ("Normal", "SRT", "G-A460")
Finding site: ("Peripheral zone of the prostate", "SRT", "T-D05E4")
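For example, the whole-gland measurement group could carry these coded items in the measurements JSON (a sketch; the exact key names and nesting should be checked against the example metadata file):

```json
{
  "Finding": {
    "CodeValue": "T-F6078",
    "CodingSchemeDesignator": "SRT",
    "CodeMeaning": "Entire Gland"
  },
  "FindingSite": {
    "CodeValue": "T-9200B",
    "CodingSchemeDesignator": "SRT",
    "CodeMeaning": "Prostate"
  }
}
```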
Following the description of the top-level attributes for a measurement group is the list of individual measurements. Each measurement item must include the following attributes:
value: the measurement value
quantity, units, and derivationModifier: coded tuples describing the quantity. In our case, the measurements are either the volume of the segmented regions, or the mean value of the Apparent Diffusion Coefficient (ADC).
Volume:
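A sketch of how a volume measurement could be encoded (the numeric value is an arbitrary illustration, and the codes should be verified against the quantity and units context groups referenced by TID 1500):

```json
{
  "value": "22.3",
  "quantity": {
    "CodeValue": "G-D705",
    "CodingSchemeDesignator": "SRT",
    "CodeMeaning": "Volume"
  },
  "units": {
    "CodeValue": "mm3",
    "CodingSchemeDesignator": "UCUM",
    "CodeMeaning": "cubic millimeter"
  }
}
```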
The converter needs to run separately for each measurements JSON file. Switch to the folder where you placed the measurements JSON file (preferably this should be a folder separate from the other data). Then execute the following command:
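A sketch of the tid1500writer invocation (option names should be confirmed with tid1500writer --help; directory names are hypothetical):

```bash
tid1500writer \
  --inputMetadata measurements.json \
  --inputImageLibraryDirectory ../orig-img-dicom \
  --inputCompositeContextDirectory ../segmentations \
  --outputDICOM <out-name>.SR.dcm
```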
This will create a <out-name>.SR.dcm file, which contains the structured report with all measurements referencing the original images and the segmentations from which the measurements originated.
Platform-specific packages of dcmqi accompany the library starting from release v1.0.1. You can download these packages for Windows, macOS and Linux from the dcmqi releases page on GitHub. Extract the package archive for your operating system to access the command line tools.
One of the fundamental principles of DICOM is the use of controlled terminologies, or lexicons, or coding schemes (for the purposes of this guide, these can be used interchangeably). While using dcmqi you will encounter various situations where you will need to select codes to describe the data you are converting into DICOM. In this and the following sections we give you an overview and describe the general principles for deciding how to choose such codes.
Controlled terminologies define a set of codes, and sometimes their relationships, that are carefully curated to describe entities for a certain application domain. Consistent use of such terminologies helps with uniform data collection and is critical for harmonization of activities conducted by independent groups.
When codes are used in DICOM, they are saved as triplets that consist of
CodeValue: unique identifier for a term
CodingSchemeDesignator: code for the authority that issued this code
CodeMeaning: human-readable code description
DICOM relies on various sources of codes, all of which are listed in Part 16 (PS3.16) of the standard.
The first question while choosing the coding scheme is whether you will use an existing code, or define your own. Depending on your choice, follow one of the two recipes below.
Use an existing code: if there is no matching code already included in DICOM, you can look for a suitable code in an existing controlled terminology outside DICOM. This option often requires more effort, but it will make your resulting data consistent with existing controlled terminologies, facilitating reasoning on the resulting data and aggregation of results collected from different sources. This "semantic" approach using standard codes allows for greater reuse and harmonization with other data sets, since the need for natural language parsing of plain text during "data mining" is obviated by the commonality of standard codes for standard entities, such as anatomical regions, types of tumor, etc. The choice of the coding scheme and specific codes will depend on the specific data conversion task.
Define a private code: if you follow this option, you can either reuse an existing terminology which is not listed in the DICOM standard, or define your own terminology and set of codes. You can still produce data that is harmonized with other sources, but to achieve this you will need to make sure all "data producers" follow your coding scheme! This approach is suitable when you are working on an application where no established terminology exists (e.g., no consensus within your community has been reached), or when semantic interoperability is not a priority.
dcmqi should build on Linux, Mac and Windows. You can confirm this is the case for the current version of the code by looking at the continuous integration (CI) platforms.
Linux build:
Windows build:
Mac OS X build:
Note that the failure icons indicate that something in the dashboard script failed - this could be a build error, a failed test, a failed artifact upload, or a failed download of a prerequisite. You will need to check the console output for the specific platform to see if there are problems with the build.
We use git/github to maintain the repository. You can clone the repository using this command:
git clone https://github.com/QIICR/dcmqi.git
If you are not familiar with git, there are many guides to help you get started; a basic tutorial should take about 10 minutes.
developer environment for your platform (compiler, git)
recent version of
Both can be installed with npm as follows:
You can use this approach if you have (some of) the dependency libraries already built on your platform. The dcmqi dependencies include DCMTK and ITK. To reuse builds of those libraries, you will need to pass the corresponding variables to cmake as shown in the example below:
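A sketch of such a configuration, assuming existing DCMTK and ITK build trees at hypothetical paths (the dependency location variables follow the usual CMake <package>_DIR convention; check the top-level CMakeLists.txt for the exact names and for the switch that disables the superbuild):

```bash
cmake <dcmqi source directory> \
  -DDCMTK_DIR=/path/to/DCMTK-build \
  -DITK_DIR=/path/to/ITK-build
```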
To resolve this, check your global git settings. If you have autocrlf set, you may try changing its value and checking out the code again.
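For example, you can inspect and adjust the setting as shown below (whether true, input or false is appropriate depends on your platform):

```bash
git config --global core.autocrlf          # show the current value
git config --global core.autocrlf input    # or true/false, then re-clone the repository
```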
Docker is a project that automates the deployment of applications inside software containers. A Docker application is defined by images that contain all of the components and steps needed to initialize the application instance. A container is a running instance of an image. We provide an image that contains the compiled dcmqi library. By using the dcmqi Docker container you can use dcmqi on any operating system without having to compile it. All you need to do is install Docker on your system, and download the dcmqi Docker image.
You will first need to install Docker on your system following the official installation instructions. Docker is available for Mac, Windows and Linux. For the most part Docker installation is straightforward, but some extra steps need to be taken on Windows, as discussed below.
Note the following Windows-specific requirements:
you will need to have Windows 10 Pro or above
you will need to enable the Hyper-V package (Docker will prompt you)
you will need to enable virtualization; guides are available that explain how to check whether it is enabled and how to enable it if it is not, but they assume you can access your BIOS settings
IMPORTANT: You will also need to share the drive you will be using to communicate data to and from Docker image in Docker Settings as shown in the screenshot below.
Most likely you will experience the display of an error message similar to the one shown below.
If you have this error, make sure that the drive where the HOST_DIR is located is shared:
right-click the Docker taskbar icon and choose "Settings"
choose "Shared Drives" from the left menu (a list of drives that are available to share will be displayed)
select the drive containing your HOST_DIR to be shared
confirm with Apply and continue
Pull the dcmqi image to your system to instantiate the dcmqi container:
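As given in the installation instructions, the image is pulled with:

```bash
docker pull qiicr/dcmqi
```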
You can now run any of the command line converters provided by dcmqi by passing the name of the converter as shown below:
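For example, printing the usage information of one of the converters is a harmless first call that does not need any files:

```bash
docker run qiicr/dcmqi itkimage2segimage --help
```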
Docker containers cannot directly access the filesystem of the host. In order to pass files as arguments to the dcmqi converters, and to access files that the converters create, an extra step is required to specify which directories will be used for file exchange, using the -v argument:
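A sketch with placeholder directories (replace HOST_DIR and CONTAINER_DIR with real paths):

```bash
docker run -v HOST_DIR:CONTAINER_DIR qiicr/dcmqi <converter name> <converter arguments>
```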
The argument above will make the HOST_DIR path available within the container at the CONTAINER_DIR location. The files that will be read or written by the converter run from the Docker container should be referred to via the CONTAINER_DIR path.
Assuming the Docker image is installed, create an empty directory docker_test.
Put the following test files from the dcmqi source code repository into the docker_test directory:
Run the itkimage2paramap converter:
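A sketch of the command, assuming docker_test holds a parametric map volume and its JSON metadata (the file names below are hypothetical, and the option names, as well as any additional required inputs, should be verified with itkimage2paramap --help):

```bash
docker run -v /path/to/docker_test:/tmp/docker_test qiicr/dcmqi itkimage2paramap \
  --inputFileName /tmp/docker_test/pm-example.nrrd \
  --inputMetadata /tmp/docker_test/pm-example.json \
  --outputDICOM /tmp/docker_test/docker_output_paramap.dcm
```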
The output DICOM object will be saved as docker_output_paramap.dcm in the docker_test directory.
A convenient place to start with generating JSON files for the dcmqi SEG converter is the helper web application described earlier.
Peripheral zone: this one is more difficult - no code could be located in the standard. We needed to consult SNOMED CT to find "Peripheral zone of the prostate", which has SCTID 279706003. Next, we looked up the SRT code for this SCTID in an older version of SNOMED (this was done by our good friend David Clunie!). Result:
Colors: these are up to the creator, but it is usually a good idea to pick colors that allow to easily differentiate regions. We use the color assignment summarized in .
Overall, the conversion of segmentation-based measurements into a DICOM Structured Report that follows the TID 1500 template is supported by the dcmqi tool tid1500writer. This tool expects as input the source DICOM image series, the DICOM Segmentation object, and a JSON file with the measurements and the associated metadata.
We do not provide a web application to populate such a file, so at this moment the easiest approach is to start with an existing example and update it as needed.
Mean ADC (note how the fact that we encode the mean value of ADC over the region is expressed with the derivationModifier):
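A sketch of such a measurement item (the numeric value is an arbitrary illustration; the codes shown should be verified against the measurement templates in PS3.16):

```json
{
  "value": "1.2",
  "quantity": {
    "CodeValue": "113041",
    "CodingSchemeDesignator": "DCM",
    "CodeMeaning": "Apparent Diffusion Coefficient"
  },
  "units": {
    "CodeValue": "um2/s",
    "CodingSchemeDesignator": "UCUM",
    "CodeMeaning": "micrometer squared per second"
  },
  "derivationModifier": {
    "CodeValue": "R-00317",
    "CodingSchemeDesignator": "SRT",
    "CodeMeaning": "Mean"
  }
}
```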
Optionally, we can also encode the ADC maps, which were generated on the GE imaging post-processing equipment and stored as MR objects. Instead, we can use the DICOM Parametric map object, since it allows us to explicitly communicate the quantity, units and the type of ADC fitting approach that was used. The itkimage2paramap converter can be used directly for this conversion task.
Note that you can also see the specific components and steps needed to build dcmqi by looking at the CI configuration scripts for Linux, Mac, and Windows in the repository.
If you would like to run the dcmqi tests, you will need to install a few extra tools for validating JSON files. These tools depend on npm, installed as part of Node.js, and are the following:
With the superbuild approach, all of the dependencies will be built as part of the build process. This approach is the easiest, but also the most time-consuming.
1. create a dcmqi-superbuild directory
2. configure the project by running cmake <dcmqi source directory> from dcmqi-superbuild
3. run make from the superbuild directory
Under certain conditions, line endings may be incorrectly initialized for your platform by the checkout process (reported by @CJGoch), which may result in errors like the one below:
The Dockerfile for qiicr/dcmqi is available in the main repository of dcmqi. It does not rely on any proprietary or non-open-source components.
The itkimage2segimage tool can be used to save volumetric segmentation(s) stored as labeled pixels, using any of the formats supported by ITK such as NRRD or NIfTI, as a DICOM Segmentation object (further referred to as SEG).
Most of the effort will be required to populate the content of the meta-information JSON file. You can use the helper web application that provides a user interface to help with populating the content of the metadata JSON file. The details are discussed below.
The structure of the metadata JSON is defined by this JSON-Schema file. Interpretation of JSON-Schema may require some effort, especially considering that this particular file uses externally defined items. It may be easier to start with an example JSON file that "instantiates" this schema, such as this one.
In the following, we will guide you through the contents of this file - line by line.
This opening line references the schema this parameter file should conform to. Make sure you include this line without changes!
These lines correspond to the metadata attributes that will be populated in the resulting DICOM SEG object. It is your choice how you want to populate them. There are certain constraints on the values of these attributes; if those constraints are not met, the converter will fail. In the future, we will provide instructions for validating your meta-information file.
The remainder of the file is a nested list (top-level list corresponds to the input segmentation files, and the inner list corresponds to the individual segments within each file) that specifies metadata attributes for each of the segments that are present in the input segmentation files.
For each of the segments, you will need to specify the following mandatory attributes:
labelID defines the value of the segment in the segmentation file that will be assigned the attributes listed.
WARNING: labelID is not stored in the output DICOM! The sole purpose of this attribute is to establish the connection between the labels encoded in the input ITK files and the metadata describing those labels (segments). The output DICOM files will have segments numbered consecutively starting from 1, and labelID should not be used to encode the type of structure being segmented. What the segment actually represents is indicated by a set of "codes": SegmentedPropertyCategoryCodeSequence, SegmentedPropertyTypeCodeSequence, and (optionally) SegmentedPropertyTypeModifierCodeSequence, as discussed below.
Note that if you really want to preserve a particular identifier from a source format, then although DICOM SegmentNumber is required to start from 1 and increase by 1 (and is used for internal reference within the segment instance), SegmentLabel can be anything that fits within a 64 character string.
For example, SegmentLabel could repeat the identifier used in the source data, contain a free-text name of the structure, or, as the standard recommends but does not mandate, repeat the CodeMeaning of SegmentedPropertyTypeCodeSequence.
Note that the anatomic region (where the primary tumor is) can be coded separately.
SegmentDescription is a short free-text description of the segment.
SegmentAlgorithmType can be assigned MANUAL, SEMIAUTOMATIC or AUTOMATIC. If the value of this attribute is not MANUAL, the SegmentAlgorithmName attribute is required to be initialized!
SegmentAlgorithmName should be used to assign a short name of the algorithm used to perform the segmentation.
recommendedDisplayRGBValue can be used to specify the RGB color recommended for displaying the segment. Alternatively, the RecommendedDisplayCIELabValue attribute can be used to specify the color in the CIELab color space.
SegmentedPropertyCategoryCodeSequence and SegmentedPropertyTypeCodeSequence are attributes that should be assigned code tuples describing the meaning of what is being segmented.
Each code tuple consists of three components: CodeValue, CodingSchemeDesignator and CodeMeaning. CodingSchemeDesignator defines the "authority", or source, of the code. Each CodeValue should be unique for a given CodingSchemeDesignator. CodeMeaning is a human-readable meaning of the code. DICOM defines several coding schemes recognized by the standard, listed in PS3.16 Section 8.
The task of selecting a code to describe a given segment may not be trivial, since there are implicit constraints/expectations on the values of these codes. As an example, the possible values of SegmentedPropertyTypeCodeSequence are predicated on the value of SegmentedPropertyCategoryCodeSequence. It is also possible to define SegmentedPropertyTypeModifierCodeSequence, which can be used, for example, to define laterality. In some situations it is appropriate or required to also specify the anatomical location of the segmentation (e.g., the organ in which a tumor was segmented). The latter can be achieved using the AnatomicRegionSequence and AnatomicRegionModifierSequence coded attributes.
To simplify selection of codes for defining semantics of the segment, we provide a helper web application that can be used to browse supported codes and automatically generate the corresponding section of the JSON file. When no suitable codes can be found, it is also permissible to define so called private, or local, coding schemes (see PS3.16 Section 8.2).
You can also see the dedicated section of the documentation discussing the various options of searching for the coded terms that are available to you.
In DICOM, the process of choosing a code, and a coding scheme, depends on the context. In the following, we will discuss some of the guidelines that can be used to choose suitable codes for the tasks of segmentation, measurements and parametric map conversion supported by dcmqi.
While converting segmentations, you will need to define the following coded entities in the JSON file:
SegmentedPropertyCategoryCodeSequence
SegmentedPropertyTypeCodeSequence
SegmentedPropertyTypeModifierCodeSequence (when applicable)
AnatomicRegionSequence (when applicable)
AnatomicRegionModifierSequence (when applicable)
This looks overwhelming indeed! That's why we developed a web application to help you interactively choose the codes for each of those items. If you want to know the details, read on!
For each of these attributes, DICOM provides guidance on the selection of the suitable codes.
SegmentedPropertyCategoryCodeSequence codes are listed in context group ID (CID) CID 7150 (extensible, i.e., you are not forced to use only the codes from this selection)
SegmentedPropertyTypeCodeSequence codes are defined in CID 7151, which points to the application-specific CIDs that you can follow for the lists of codes
SegmentedPropertyTypeModifierCodeSequence is an optional attribute that augments the segmented property type code. As an example, if SegmentedPropertyType is "Kidney", SegmentedPropertyTypeModifier can be "Left" to specify laterality. More modifier codes are available in Part 16 of the standard; as an example, see CID 2 Anatomic Modifier.
AnatomicRegionSequence and its modifier do not always have to be specified. In some situations, information contained in SegmentedPropertyType is sufficient. E.g., if one is creating an atlas, where the properties are purely anatomical, and there is no more to say about them than the anatomy, then the anatomy goes in the Segmented Property Type Code Sequence. If one has different types of properties (e.g., necrosis, enhancing rim, gross tumor volume), but one wants to say something about the anatomy (e.g., where the tumor is at), then the property goes in Segmented Property Type Code Sequence and the anatomy goes in Anatomic Region Sequence. When you do want to specify AnatomicRegion, you can consult CID 4 Anatomic Region for the list of codes. Modifier for this code is specified as needed, following the same approach as discussed for SegmentedPropertyTypeModifierCodeSequence.
The web application we mentioned earlier provides an interactive interface to somewhat simplify the task of populating the codes discussed above.
Most, if not all, codes listed in the contexts referenced earlier are from SNOMED CT, and have DICOM CodingSchemeDesignator SRT. The reason for this is that the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) coding scheme is the preferred controlled terminology used by DICOM. SNOMED CT maintains a systematically organized, computer-processable collection of medical terms providing codes, terms, synonyms and definitions used in clinical documentation and reporting. SNOMED CT maintains the hierarchy of relationships among the codes, which can be used for semantic reasoning on the data. As an example, see the hierarchy of the entities related to the Liver code in the SNOMED CT Browser.
Note that SNOMED CT codes included in the DICOM standard are exempt from SNOMED CT license. The details are discussed in this blog post. In short:
Users and commercial and open source DICOM developers can be reassured that they may continue to use the subset of SNOMED concepts in the DICOM standard in their products and software, globally and without a fee or individual license.
TBD - work in progress
The following codes can be passed to describe the parametric map you are converting using itkimage2paramap converter:
QuantityValueCode: Quantity being measured at each pixel - select code from TID 7180, or introduce a private code.
MeasurementUnitsCode: Units of measurement. DICOM uses the Unified Code for Units of Measure (UCUM) code system (CodingSchemeDesignator UCUM) to describe units. Some of the commonly used unit codes are listed in CID 7181, but as discussed in Part 16 Section 7.2.2, any of the UCUM codes can be used in DICOM.
DerivationCode: select code from CID 7203, or introduce a private code.
AnatomicRegionSequence and Modifier (optional): follow the same guidelines as discussed for Segmentation anatomic codes selection.
MeasurementMethodCode (optional): code describing the model used for deriving the quantity.
ModelFittingMethodCode (optional): code describing the model fitting method.
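As a rough sketch, not a definitive recipe, the attributes listed above might be filled in for an ADC map as follows. The codes shown are illustrative and should be verified against CID 7180 and CID 7181 before use.

```json
{
  "QuantityValueCode": {
    "CodeValue": "113041",
    "CodingSchemeDesignator": "DCM",
    "CodeMeaning": "Apparent Diffusion Coefficient"
  },
  "MeasurementUnitsCode": {
    "CodeValue": "um2/s",
    "CodingSchemeDesignator": "UCUM",
    "CodeMeaning": "um2/s"
  }
}
```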
In the future we plan to provide specific recipes that describe the sets of codes suitable for specific use-cases (e.g., estimating Apparent Diffusion Coefficient (ADC) from Diffusion-Weighted MRI, or performing pharmacokinetic modeling of the Dynamic Contrast-Enhanced MRI).
Relevant development of the codes related to ADC calculation can be found in DICOM Correction Proposal (CP) CP-1665. These codes are expected to become part of the standard in Spring 2017.
For now, the best place to start is this web application (select pm-schema in the Validation schema selector) that you can use to choose an existing example and modify it to tailor to your use case.
dcmqi provides command line tools to convert rasterized segmentations stored in commonly used research formats, such as NRRD or NIfTI, into DICOM Segmentation image storage (DICOM Segmentation) objects.
DICOM Segmentations are organized as lists of segments, where each segment corresponds to a separate object/label being segmented. Segments can overlap (i.e., a single voxel of the source image can have multiple labels). Each segment contains information about what it describes and what method was used to generate it.
To perform the conversion to DICOM, the segmentation (image volume representing the labeling of the individual image voxels) needs to be accompanied by a JSON file that describes segmentation metadata (such as the one in this example), and by the DICOM dataset corresponding to the source image data being segmented. The source DICOM dataset is used to populate metadata attributes that are inherited by the segmentation (i.e., composite context), such as information about patient and imaging study.
Conversion from DICOM Segmentation to research formats produces one file per segment saving the labeled image raster in the research format, such as NRRD or NIfTI, and a metadata JSON file.
dcmqi provides a set of command line tools that perform conversion between research formats and DICOM.
While DICOM allows for reuse of the codes defined in other terminologies, such as SNOMED, as well as those defined in the DICOM standard itself, so-called “private” codes can also be defined by the creator of the object when no standard codes are available. Such private codes are distinguished by a coding scheme designator that must start with the “99” prefix.
As an example, consider that Dr. Smith developed a new model to estimate ADC from diffusion MRI. Since the method is new, there is no standard code for it. Dr. Smith can then establish her own coding scheme designator, say, 99DRSMITH, and define a new code as the following triple:
('SADC123','99DRSMITH','Smith Diffusion Model').
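In a metadata JSON file, such a hypothetical private code would be entered the same way as any standard code tuple, for example:

```json
{
  "QuantityValueCode": {
    "CodeValue": "SADC123",
    "CodingSchemeDesignator": "99DRSMITH",
    "CodeMeaning": "Smith Diffusion Model"
  }
}
```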
To find a suitable code in an existing terminology, you will need to know what terminology to search, and you will need a tool to facilitate your search.
DICOM has a preference for using SNOMED-CT codes. If you find a code in SNOMED-CT that would fill an important gap, but is not in the standard, you can propose inclusion of that new code into the standard. While looking for codes in SNOMED-CT, note the following guideline from the standard (PS3.16, Section 8.1.1):
8.1.1 Use of SNOMED Anatomic Concepts
In general, DICOM uses the anatomic concepts with the term "structure", rather than with the term "entire". This is an important distinction in SNOMED. "Entire" is a child concept to "structure", has a more restricted meaning, and typically is used in conjunction with treatments (e.g., "excision of entire right kidney"). It is used in distinction to other sibling children of the parent concept that may identify parts of the parent anatomic feature. Since imaging typically targets both the anatomic feature and the area around it, or sometimes just part of the anatomic feature, DICOM usually uses "structure" concepts that are more inclusive than the "entire" concepts.
However, keep in mind that:
the process of contributing something into the DICOM standard is lengthy and can take a year before you see your change in the standard text;
you will need to learn the procedures of contributing changes to the standard;
although the SNOMED-CT codes included in the standard are exempt from the licensing constraints of SNOMED-CT, the ontology defined by SNOMED-CT is NOT covered by the exemption. Therefore, you will still need a secondary ontology if you are concerned about the license, and want to do reasoning on your data.
If a certain term is not found in SNOMED, it might be possible to add it. If you would like to request adding a new term in the context of DICOM, you can contact DICOM secretariat at http://www.dicomstandard.org/contact/secretariat/, and they would then pass your request to the person responsible for the interactions with SNOMED. Also, each country has its SNOMED national member contact: http://www.snomed.org/our-customers/members, in case you want to make a non-DICOM related request.
Therefore, it can be more practical to find a suitable term in an ontology other than SNOMED-CT.
In this regard, David Clunie, the long-time Editor of the DICOM standard, gave the following guidance on what terminologies to consider when a gap in the standard is identified:
[...] we (DICOM) use FMA then NeuroNames as a fallback when there are no appropriate SNOMED codes (yet), and have contacts with each of the appropriate groups to extend the schemes as necessary. We have not used RadLex for anatomy, since it is all/mostly(?) in FMA anyway (and if I recall correctly, was derived from it, since the original RadLex protagonists had no interest in reinventing that wheel).
If you need to automate anatomical code mapping, consider using the UMLS as a tool ... frequently both SNOMED and FMA terms map to a common UMLS code which helps a lot.
You can also use the FMAIDs included in the RadLex ontology (http://purl.bioontology.org/ontology/RADLEX) to map from RadLex back to FMA (or the reverse, since I think the FMA OWL file also includes the RadLex RIDs), then to UMLS and on to SNOMED, and indeed then undo the pre-coordinated laterality (if necessary) using the SNOMED hierarchy.
To search existing terminologies, you can consider using tools that search across different ontologies, such as BioPortal and the Ontology Lookup Service (OLS).
With both of these search engines you have the option of using the advanced search to restrict the terms to a specific ontology.
There are also some search tools that provide searches for individual ontologies (such as FMA), but in some instances their search capabilities are not flexible enough, and as such we recommend BioPortal and OLS. Unfortunately, neither BioPortal nor OLS includes NeuroNames, so you will need to use the specialized search tool for that ontology: http://braininfo.rprc.washington.edu/.
Here is an illustrative example of searching for a term "anterior cingulate gyrus", which is not included in DICOM (note that you can use AnatomicRegionModifier to encode laterality of the structure).
Using BioPortal, go to the Advanced search, put the search term in the search box, and specify FMA in the "Ontologies" selector:
The search is successful, leading to this entry, which includes the FMA ID 61916. The coding scheme designator for FMA is FMA, so you can use the following code to describe the item:
(61916, FMA, "Anterior cingulate gyrus")
Using OLS, you can select the specific ontology, and search specifically in FMA for the same term, which will result in the same code:
Note that OLS (as of this writing) uses a "slimmed down version of FMA", but for most common purposes this should still be sufficient.
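Assuming you want to record this structure in a dcmqi metadata file, the resulting anatomic region tuple would look roughly like this (field names follow the convention used elsewhere in the dcmqi JSON schemas):

```json
{
  "AnatomicRegionSequence": {
    "CodeValue": "61916",
    "CodingSchemeDesignator": "FMA",
    "CodeMeaning": "Anterior cingulate gyrus"
  }
}
```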
Definition of post-coordinated codes from UMLS: https://utsreport.nlm.nih.gov/umlsreport/sourceDocs/postcoordinated.html
The following section is intended for the users of command-line tools provided by dcmqi.
We are continuously working on the content of this guide. It will change and improve in the future.
Your feedback is very important in improving the quality of this guide.
To contribute your feedback, you can initiate a discussion for a specific paragraph of text. If you mouse over the paragraph while reading the web version of the book on gitbooks.io, you should see a + symbol to the right of the paragraph you are reading. You can click it and initiate a new discussion, as shown in the screenshot below.
Once you post your feedback, developers of this guide will receive automatic notifications, and will respond to your suggestions or concerns.
Note that you will need to sign in before you can participate in a discussion (gitbook accepts Facebook, Twitter, Google and Github authentication).
dcmqi provides command line tools to convert lists of measurements calculated from the images for the regions defined by rasterized segmentations into DICOM representation. Specifically, the DICOM representation suitable for such data is DICOM Structured Reporting (SR).
Each measurement is associated with a specific segment in the corresponding DICOM Segmentation object. For each measurement, Quantity, Units and Derivation (when appropriate) must be specified as coded tuples. Multiple measurements can be assigned in a list for the individual segment.
At the moment, the measurements must be specified in a JSON file, such as the one shown in the example. Support for the CSV format, to enable bi-directional conversion of the measurements data, is planned.
You can use the online validator to check whether the JSON file you are passing to the converter conforms to the schema.
dcmqi provides command line tools to convert results of post-processing of the image data, such as applying a certain model to the data, into DICOM format. As an example, Apparent Diffusion Coefficient (ADC) maps derived by fitting various models to Diffusion-Weighted Magnetic Resonance Imaging (DW-MRI) data have shown promise in characterizing the aggressiveness of prostate cancer. The result of conversion is a DICOM Parametric Map object.
Mandatory metadata that needs to be specified to enable conversion include:
Quantity being measured
Units of the quantity being measured
Measurement method
Each of these items, in addition to some other attributes, must be specified using coded values. An example of the metadata file is available.
This tool can be used to convert a DICOM Structured Report object that follows the TID 1500 template into a JSON representation of the measurements. The converter was developed and tested specifically to recognize SR TID 1500 objects that store measurements derived from volumetric rasterized segmentations. It will not work for other use cases of TID 1500.
This tool can be used to save measurements calculated from the image over a volume defined by image segmentation into a DICOM Structured Report that follows the TID 1500 template.
In the following, we will guide you through the contents of this file - line by line.
This opening line references the schema this parameter file should conform to. Make sure you include this line without changes!
These lines define top-level attributes of the resulting DICOM object. You can change these to adjust to your needs, subject to some constraints that are not covered here for now.
These two items contain lists of file names that should exist in the directories specified by the --compositeContextDataDir and --imageLibraryDataDir arguments, respectively. You should include the file with the DICOM Segmentation object defining the segmented region in the compositeContext attribute!
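Putting these first items together, the top of a metadata file looks approximately like the sketch below. The schema URL, series attributes and file names are placeholders here, so copy the exact values from the example file you started with.

```json
{
  "@schema": "https://raw.githubusercontent.com/qiicr/dcmqi/master/doc/schemas/sr-tid1500-schema.json#",
  "SeriesDescription": "Measurements",
  "SeriesNumber": "1001",
  "InstanceNumber": "1",
  "compositeContext": ["liver_seg.dcm"],
  "imageLibrary": ["image_001.dcm", "image_002.dcm"]
}
```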
These are the attributes of the person that performed the measurements. If you want to list a device instead of a person, that is also possible, but should be done differently; please ask about the details.
Values for VerificationFlag can be one of VERIFIED or UNVERIFIED. CompletionFlag values are either PARTIAL or COMPLETE.
The activitySession attribute can be used to encode the session number when, for example, the same structure was segmented multiple times. timePoint can be used in the situation of longitudinal tracking of the measurements.
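For example (the values below are arbitrary; whether a given attribute is encoded as a string or a number is dictated by the schema, so follow the example file):

```json
{
  "VerificationFlag": "VERIFIED",
  "CompletionFlag": "COMPLETE",
  "activitySession": "1",
  "timePoint": "1"
}
```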
This is the beginning of the structure where the actual measurements are stored. The measurements are stored hierarchically, and can include 1 or more measurement groups, where each measurement group encodes one or more measurement items.
For each measurement group, you will need to define certain common attributes shared by all measurement items within that group:
TrackingIdentifier is a human-readable string naming the group.
ReferencedSegment is the ID of the segment within the DICOM segmentation object that defines the region used to calculate the measurement.
SourceSeriesForImageSegmentation is the SeriesInstanceUID of the original image series on which the segmentation was created.
segmentationSOPInstanceUID is the SOPInstanceUID of the DICOM Segmentation object.
Finally, measurementItems contains the list of individual measurements. Each measurement is encoded by specifying the following properties:
value: the number, i.e., the actual measurement
quantity: code triplet encoding the quantity being measured
units: code triplet defining the units of the value. DICOM uses the Unified Code for Units of Measure (UCUM) system for encoding units.
derivationModifier: code triplet encoding the quantity modifier.
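A minimal sketch of one measurement group with a single measurement item, using the attribute names just described; the UIDs, the codes and the exact value type are placeholders, so consult the schema and the example files for the authoritative structure.

```json
{
  "Measurements": [
    {
      "TrackingIdentifier": "Liver segmentation",
      "ReferencedSegment": 1,
      "SourceSeriesForImageSegmentation": "1.2.840.113654.2.55.1234567890",
      "segmentationSOPInstanceUID": "1.2.276.0.7230010.3.1.4.1234567890",
      "measurementItems": [
        {
          "value": "37.3",
          "quantity": {
            "CodeValue": "G-D705",
            "CodingSchemeDesignator": "SRT",
            "CodeMeaning": "Volume"
          },
          "units": {
            "CodeValue": "cm3",
            "CodingSchemeDesignator": "UCUM",
            "CodeMeaning": "cubic centimeter"
          }
        }
      ]
    }
  ]
}
```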
segimage2itkimage
This tool can be used to convert a DICOM Segmentation into volumetric segmentations stored as labeled pixels using a research format, such as NRRD or NIfTI, with meta information stored in the JSON file format.
Here are some representative examples of DICOM Segmentations:
To download the files for the studies listed above:
install idc-index package with pip install --upgrade idc-index
download the study by specifying StudyInstanceUID (listed in the URLs above after the '=' sign) with idc download 1.2.840.113654.2.55.256011367872217445472654886973509892961
Most of the effort will be required to populate the content of the meta-information JSON file. Its structure is defined by a JSON-Schema file. Interpretation of JSON-Schema may require some effort, especially considering that this particular file uses externally defined items. It may be easier to start with an example JSON file that "instantiates" this schema.
Finding is a triplet of (code, codingSchemeDesignator, codeMeaning) defining the finding over which the measurement is being performed. You can read more about how these triples are defined elsewhere in this guide.
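For example, a Finding tuple in the metadata file might look like the following (the code is shown purely for illustration):

```json
{
  "Finding": {
    "CodeValue": "T-62000",
    "CodingSchemeDesignator": "SRT",
    "CodeMeaning": "Liver"
  }
}
```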
The most challenging part of encoding measurements is arguably the process of identifying the codes corresponding to the quantity and the derivation modifier (if necessary). You may want to read the discussion of this topic on p.19 of the referenced document. For practical purposes, you can study the measurements encoded in the example and follow the pattern. In the future, we will add more details, more examples, and more user-level tools to simplify the process of selecting such codes.
Once you have generated the output DICOM object using the tid1500writer tool, it is always a very good idea to validate the resulting object. For this purpose we recommend the DicomSRValidator tool from the PixelMed toolkit:
You can also examine the content of the resulting document with various tools, such as dsrdump from the DCMTK suite, or (more colorful!) similar tools from other packages. You can also use a graphical viewer application to conveniently inspect the content without having to use the terminal.
If you are looking for publicly available examples of segmentation objects, or other DICOM images, you should check out the Imaging Data Commons (IDC) (see its documentation).
Segmentation of a lung nodule from the collection
viewer link:
TotalSegmentator segmentation results from the collection
viewer link:
itkimage2paramap can be used to convert a parametric map provided in any of the formats supported by ITK, such as NRRD or NIfTI, into a DICOM Parametric Map image object.
Most of the effort will be required to populate the content of the meta-information JSON file. Its structure is defined by this JSON-Schema file. Interpretation of JSON-Schema may require some effort, especially considering that this particular file uses externally defined items. It may be easier to start with an example JSON file that "instantiates" this schema, such as this one.
In the following, we will guide you through the contents of this file - line by line.
These lines correspond to the metadata attributes that will be populated in the resulting DICOM Parametric Map image object. It is your choice how you want to populate those, subject to certain constraints on the values of these attributes. If those constraints are not met, the converter will fail. In the future, we will provide instructions for validating your meta-information file.
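For instance, these top-level attributes might be populated as follows (the values are arbitrary examples, not requirements):

```json
{
  "SeriesDescription": "Apparent Diffusion Coefficient (ADC)",
  "SeriesNumber": "701",
  "InstanceNumber": "1"
}
```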
QuantityValueCode, MeasurementUnitsCode, MeasurementMethodCode and AnatomicRegionSequence are attributes (code tuples) that describe the meaning of the pixels stored in this parametric map. AnatomicRegionSequence, DerivedPixelContrast and FrameLaterality are the only attributes that are required; all others are optional.
Each code tuple consists of three components: CodeValue, CodingSchemeDesignator and CodeMeaning. CodingSchemeDesignator defines the "authority", or source, of the code. Each CodeValue should be unique for a given CodingSchemeDesignator. CodeMeaning is a human-readable meaning of the code. DICOM defines several coding schemes recognized by the standard, listed in PS3.16 Section 8.
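For example, an anatomic region tuple for the prostate ADC use case mentioned earlier could be encoded as shown below, illustrating the three components of a tuple; verify the specific code against CID 4 before using it.

```json
{
  "AnatomicRegionSequence": {
    "CodeValue": "T-92000",
    "CodingSchemeDesignator": "SRT",
    "CodeMeaning": "Prostate"
  }
}
```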
dcmqi uses pre-built binaries of ITK and DCMTK for the Appveyor builds to reduce the overall build time of the library.
Every time the DCMTK/ITK release needs to be updated, it is important to re-generate these binaries; otherwise the Appveyor build will fail when there are API changes.
The modified version of DCMTK is located in the dcmqi branch of the dcmtk-dcmqi repository. The only difference from the stock DCMTK source code is the appveyor.yml file. Thus, the steps are the following:
Check out the version of DCMTK that needs to be used
Rebase the dcmqi branch of dcmtk-dcmqi to the new version.
Update the name of the package to reflect the version and date of the DCMTK package to be created (deploy/release section of the appveyor.yml)
Push the updated dcmqi branch to QIICR/dcmtk-dcmqi
Appveyor build will be triggered automatically, and barring any build issues a new release will be uploaded to https://github.com/QIICR/dcmtk-dcmqi/releases.
The URL corresponding to DCMTK-dcmqi.zip in the release should then be used to first update https://github.com/QIICR/ITK-dcmqi/blob/dcmqi/appveyor.yml, which should result in a re-generated ITK-dcmqi package under the ITK-dcmqi releases at https://github.com/QIICR/ITK-dcmqi/releases. At that time, URLs for both ITK-dcmqi and DCMTK-dcmqi packages should be updated in the dcmqi appveyor script file: https://github.com/QIICR/dcmqi/blob/master/appveyor.yml.
With the current architecture, it is not straightforward to add a new attribute to the schema, since numerous locations need to be updated. It might be most efficient to use past pull requests adding specific attributes as a guide. The following examples might be useful:
add SegmentLabel to DICOM SEG: https://github.com/QIICR/dcmqi/pull/376
This tool can be used to convert a DICOM Parametric Map Image object into ITK image format, and generate a JSON file holding meta information.
You can experiment with the converter using the following objects:
ADC map image of the prostate (zip archive)
To generate a release, you will need to set up GPG keys on your platform to sign the commits. You can follow these GitHub instructions to complete that step.
Check that tests pass on all platforms
Edit CMakeLists.txt and update the DCMQI_VERSION_* variables; update the README.md file to point to the updated version number for the Docker image.
Commit the changes using a message like cmake: Set DCMQI version to 1.0.7
Create corresponding tag:
Push tag and master (in that order) to trigger the release build and upload
Once new packages are generated, update documentation Quick Start section to point to the new package.
dcmqi is using publish_github_release from scikit-ci-addons to upload and manage releases from CI to GitHub.
This section of the scikit-ci-addons documentation describes how to troubleshoot issues related to this mechanism.
The following section provides information for developers who want to contribute to dcmqi development.
dcmqi leverages multiple open source projects. We gratefully acknowledge those projects in the list below (list to be extended, links to be added):
These are some of the known limitations of dcmqi:
It is currently not possible to use the converters when you don't have the source DICOM data; in the future, we are planning to add a mode of operation in which all of the necessary metadata can be specified in the input JSON file.
We do not support conversion of the data (segmentations, parametric maps or measurements) derived from enhanced multiframe images.
Support of DICOM SR TID 1500 is limited to the measurements derived from the rasterized segmentations; we do not support all of the capabilities of TID 1500.
If you are on Windows:
if you are using Docker, make sure you read the relevant section, and that the drive you are using to pass data in and out of the Docker container is shared in the Docker settings
try specifying full absolute (not relative) paths to the input files and directories
TODO: discuss potential issues with the input DICOM files, how to check, reference to DICOM validator docker