Configuration Files
Features Extraction
In MEDimage, all subpackages and modules rely on a single JSON configuration file that contains the parameters for each step of the workflow (processing, extraction, etc.). For example, the IBSI tests require specific radiomics extraction parameters for each test.
You can check a full example of the file here:
notebooks/ibsi/settings/.
This section will walk you through the details of how to set up and use the configuration file. It is separated into the following subdivisions:
General analysis Parameters
n_batch
A numerical value that determines the number of batches to be used in parallel computations; set to 0 for serial computation.
type: int
e.g.
{
"n_batch" : 8
}
roi_type_labels
A list of labels for the regions of interest (ROI) to use in the analysis. Each label must match the name of the corresponding CSV file (e.g. the label "GTV" corresponds to a file named roiNames_GTV.csv).
type: List[str]
e.g.
{
"roi_type_labels" : ["GTV"]
}
roi_types
A list of labels that describe the regions of interest, used to save the analysis results. The labels must accurately reflect the regions analyzed.
type: List[str]
e.g.
{
"roi_types" : ["GTVMassOnly"]
}
Pre-checks Parameters
The pre-radiomics checks configuration is a set of parameters used by the DataManager class. These parameters must be set in a nested dictionary as follows:
{
"pre_radiomics_checks": {"All parameters go inside this dict"}
}
wildcards_dimensions
List of wildcards for voxel-dimension checks. Checks will be run for every wildcard in the list (e.g. "Glioma*.MRscan.npy" selects every MR scan of the Glioma dataset).
type: List[str]
e.g.
{
"pre_radiomics_checks" : {
"wildcards_dimensions" : ["Glioma*.MRscan.npy", "STS*.CTscan.npy"]
}
}
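The wildcard strings follow Unix shell-style pattern matching; a minimal sketch of how such a pattern selects files (the file names are illustrative):

```python
from fnmatch import fnmatch

# Hypothetical file names following the "<PatientID>.<Modality>scan.npy" convention
files = ["Glioma-001.MRscan.npy", "Glioma-002.MRscan.npy", "STS-001.CTscan.npy"]

# "Glioma*.MRscan.npy" matches every MR scan of the Glioma dataset
selected = [f for f in files if fnmatch(f, "Glioma*.MRscan.npy")]
print(selected)  # ['Glioma-001.MRscan.npy', 'Glioma-002.MRscan.npy']
```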
wildcards_window
List of wildcards for intensity-window checks. Checks will be run for every wildcard in the list.
type: List[str]
e.g.
{
"pre_radiomics_checks" : {
"wildcards_window" : ["Glioma*.MRscan.npy", "STS*.CTscan.npy"]
}
}
path_data
Path to your data.
type: str
e.g.
{
"pre_radiomics_checks" : {
"path_data" : "home/user/medimage/data/npy/sts"
}
}
path_csv
Path to your dataset CSV file.
type: str
e.g.
{
"pre_radiomics_checks" : {
"path_csv" : "home/user/medimage/data/csv/roiNames_GTV.csv"
}
}
path_save_checks
Path where the pre-checks results will be saved.
type: str
e.g.
{
"pre_radiomics_checks" : {
"path_save_checks" : "home/user/medimage/checks"
}
}
Note
Initializing the pre-radiomics checks settings is optional; it can also be done when initializing the DataManager instance.
Processing Parameters
Each imaging modality should have its own params dict inside the JSON file and should be organized as follows:
{
"imParamMR": {"Processing parameters for MR modality"},
"imParamCT": {"Processing parameters for CT modality"},
"imParamPET": {"Processing parameters for PET modality"}
}
box_string
Box of the ROI used in the workflow.
type: string
options:
- "full": use the full ROI.
- "box": use the smallest box possible around the ROI.
- "box7", "2box", etc.: variants of the smallest box (see the examples below).
e.g.
{
"imParamCT" : {
"box_string" : "box7"
},
"imParamMR" : {
"box_string" : "box"
},
"imParamPET" : {
"box_string" : "2box"
}
}
interp
Interpolation parameters.
type: dict
options:
- scale_non_text: size-3 list of the new voxel size. type: List[float]
- scale_text: list of size-3 lists of the new voxel size for texture features (features will be computed for each list). type: List[List[float]]
- vol_interp: volume interpolation method ("linear", "spline" or "cubic"). type: string
- gl_round: intensity rounding value; should be set only for CT scans. Set it to 1 to round values to the nearest integers (must be a power of 10). type: float
- roi_interp: ROI interpolation method ("nearest", "linear" or "cubic"). type: string
- roi_pv: rounding value for ROI intensities; must be between 0 and 1. type: float
e.g.
{
"imParamMR" : {
"interp" : {
"scale_non_text" : [1, 1, 1],
"scale_text" : [[1, 1, 1]],
"vol_interp" : "linear",
"gl_round" : [],
"roi_interp" : "linear",
"roi_pv" : 0.5
}
},
"imParamCT" : {
"interp" : {
"scale_non_text" : [2, 2, 3],
"scale_text" : [[2, 2, 3]],
"vol_interp" : "nearest",
"gl_round" : 1,
"roi_interp" : "nearest",
"roi_pv" : 0.5
}
},
"imParamPET" : {
"interp" : {
"scale_non_text" : [3, 3, 3],
"scale_text" : [[3, 3, 3]],
"vol_interp" : "spline",
"gl_round" : [],
"roi_interp" : "spline",
"roi_pv" : 0.5
}
}
}
reSeg
Resegmentation parameters.
type: dict
options:
- range: resegmentation range, a 2-element list of the minimum and maximum intensity values. Use "inf" for infinity. type: List
- outliers: outlier resegmentation algorithm; for now only "Collewet" is available. Use an empty string to skip outlier resegmentation. type: string
e.g.
{
"imParamMR" : {
"reSeg" : {
"range" : [0, "inf"],
"outliers" : ""
}
},
"imParamCT" : {
"reSeg" : {
"range" : [-500, 500],
"outliers" : "Collewet"
}
},
"imParamPET" : {
"reSeg" : {
"range" : [0, "inf"],
"outliers" : "Collewet"
}
}
}
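The range parameter can be illustrated with a small NumPy sketch (not MEDimage's implementation): voxels whose intensity falls outside the window are removed from the ROI mask, and the JSON string "inf" converts directly with float():

```python
import numpy as np

def reseg_range(vol, roi_mask, r):
    """Keep only the ROI voxels whose intensity lies in [r[0], r[1]]."""
    lo, hi = float(r[0]), float(r[1])  # float("inf") handles the "inf" string
    return roi_mask & (vol >= lo) & (vol <= hi)

vol = np.array([-900.0, -200.0, 40.0, 700.0])
mask = np.array([True, True, True, True])
print(reseg_range(vol, mask, [-500, 500]))  # [False  True  True False]
print(reseg_range(vol, mask, [0, "inf"]))   # [False False  True  True]
```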
discretization
Discretization parameters.
type: dict
options:
- IH: discretization parameters for intensity histogram features. type: dict
- IVH: discretization parameters for intensity volume histogram features. type: dict
- texture: discretization parameters for texture features. type: dict
IH
Discretization parameters for intensity histogram features.
type: dict
options:
- type: discretization algorithm, "FBS" (fixed bin size) or "FBN" (fixed bin number). type: string
- val: bin size or bin number, depending on the algorithm used. type: int
IVH
Discretization parameters for intensity volume histogram features.
type: dict
options:
- type: discretization algorithm, "FBS" (fixed bin size) or "FBN" (fixed bin number). type: string
- val: bin size or bin number, depending on the algorithm used. type: int
texture
Discretization parameters for texture features.
type: dict
options:
- type: list of discretization algorithms, "FBS" (fixed bin size) and/or "FBN" (fixed bin number). type: List[string]
- val: list of bin sizes or bin numbers, depending on the algorithm used. Texture features will be computed for each bin number or bin size in the list. type: List[List[int]]
e.g. for CT only (the parameters are the same for MR and PET):
{
"imParamCT" : {
"discretization" : {
"IH" : {
"type" : "FBS",
"val" : 25
},
"IVH" : {
"type" : "FBN",
"val" : 10
},
"texture" : {
"type" : ["FBS"],
"val" : [[25]]
}
}
}
}
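The two algorithm names stand for fixed bin size (FBS) and fixed bin number (FBN). A toy sketch of the two schemes (an IBSI-style illustration with grey levels indexed from 1, not MEDimage's implementation):

```python
import numpy as np

def discretize(x, algo, val):
    """Toy discretization: "FBS" uses a fixed bin size (val),
    "FBN" a fixed number of bins (val)."""
    if algo == "FBS":
        return np.floor((x - x.min()) / val).astype(int) + 1
    if algo == "FBN":
        w = (x.max() - x.min()) / val
        g = np.floor((x - x.min()) / w).astype(int) + 1
        return np.minimum(g, val)  # put the maximum into the last bin
    raise ValueError(algo)

x = np.array([0.0, 24.0, 25.0, 100.0])
print(discretize(x, "FBS", 25))  # [1 1 2 5]
print(discretize(x, "FBN", 10))  # [ 1  3  3 10]
```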
compute_suv_map
Computation of the SUV map for PET scans.
type: bool
options:
- true: the SUV map will be computed for PET scans.
- false: the SUV map will not be computed; it must have been computed beforehand.
This parameter is only used for PET scans and is set as follows:
{
"imParamPET" : {
"compute_suv_map" : true
}
}
Note
This parameter concerns PET scans only. MEDimage only computes the SUV map for DICOM scans, since the computation relies on the DICOM headers; for NIfTI scans it assumes the SUV map has already been computed.
filter_type
Name of the filter to apply to the scan. Empty string by default (no filtering).
type: string
options:
- "mean": filter images using the mean filter.
- "log": filter images using the Laplacian of Gaussian filter.
- "laws": filter images using the Laws filter.
- "gabor": filter images using the Gabor filter.
- "wavelet": filter images using the wavelet filter.
e.g.
{
"imParamPET" : {
"filter_type" : "mean"
},
"imParamMR" : {
"filter_type" : "laws"
},
"imParamCT" : {
"filter_type" : "log"
}
}
Extraction Parameters
Extraction parameters are organized in the same way as the processing parameters: each imaging modality has its own parameters, and the JSON file should be organized as follows:
{
"imParamMR": {"Extraction params for MR modality"},
"imParamCT": {"Extraction params for CT modality"},
"imParamPET": {"Extraction params for PET modality"}
}
glcm dist_correction
GLCM features weighting norm.
type: Union[bool, str]
options:
- "manhattan": will use the Manhattan norm.
- "euclidean": will use the Euclidean norm.
- "chebyshev": will use the Chebyshev norm.
- true: will use discretization length difference corrections, as used by the Institute of Physics and Engineering in Medicine.
- false: will not use any weighting norm.
e.g.
{
"imParamMR" : {
"glcm" : {
"dist_correction" : false
}
},
"imParamCT" : {
"glcm" : {
"dist_correction" : "chebyshev"
}
},
"imParamPET" : {
"glcm" : {
"dist_correction" : "euclidean"
}
}
}
glcm merge_method
GLCM features aggregation method.
type: string
options:
- "vol_merge": features are extracted from a single matrix after merging all 3D directional matrices.
- "slice_merge": features are extracted from a single matrix after merging 2D directional matrices per slice, then averaged over slices.
- "dir_merge": features are extracted from a single matrix after merging 2D directional matrices per direction, then averaged over directions.
- "average": features are extracted from each 3D directional matrix and averaged over the 3D directions.
e.g.
{
"imParamMR" : {
"glcm" : {
"merge_method" : "average"
}
},
"imParamCT" : {
"glcm" : {
"merge_method" : "vol_merge"
}
},
"imParamPET" : {
"glcm" : {
"merge_method" : "dir_merge"
}
}
}
glrlm dist_correction
GLRLM features weighting norm.
type: Union[bool, str]
options:
- "manhattan": will use the Manhattan norm.
- "euclidean": will use the Euclidean norm.
- "chebyshev": will use the Chebyshev norm.
- true: will use discretization length difference corrections, as used by the Institute of Physics and Engineering in Medicine.
- false: will not use any weighting norm.
e.g.
{
"imParamMR" : {
"glrlm" : {
"dist_correction" : false
}
},
"imParamCT" : {
"glrlm" : {
"dist_correction" : "chebyshev"
}
},
"imParamPET" : {
"glrlm" : {
"dist_correction" : "euclidean"
}
}
}
glrlm merge_method
GLRLM features aggregation method.
type: string
options:
- "vol_merge": features are extracted from a single matrix after merging all 3D directional matrices.
- "slice_merge": features are extracted from a single matrix after merging 2D directional matrices per slice, then averaged over slices.
- "dir_merge": features are extracted from a single matrix after merging 2D directional matrices per direction, then averaged over directions.
- "average": features are extracted from each 3D directional matrix and averaged over the 3D directions.
e.g.
{
"imParamMR" : {
"glrlm" : {
"merge_method" : "average"
}
},
"imParamCT" : {
"glrlm" : {
"merge_method" : "vol_merge"
}
},
"imParamPET" : {
"glrlm" : {
"merge_method" : "dir_merge"
}
}
}
ngtdm dist_correction
NGTDM features weighting norm.
type: bool
options:
- true: will use discretization length difference corrections, as used by the Institute of Physics and Engineering in Medicine.
- false: will not use any weighting norm.
e.g.
{
"imParamMR" : {
"ngtdm" : {
"dist_correction" : false
}
},
"imParamCT" : {
"ngtdm" : {
"dist_correction" : true
}
},
"imParamPET" : {
"ngtdm" : {
"dist_correction" : true
}
}
}
Filtering parameters
Filtering parameters are organized in a separate dictionary that contains the parameters for every filter available in MEDimage:
{
"imParamFilter": {
"mean": {"mean filter params"},
"log": {"log filter params"},
"laws": {"laws filter params"},
"gabor": {"gabor filter params"},
"wavelet": {"wavelet filter params"},
"textural": {"textural filter params"}
}
}
mean
Parameters of the mean filter.
type: dict
options:
- ndims: dimension of the imaging data, usually 3. type: int
- orthogonal_rot: if true, the filter response is averaged over the orthogonal rotations of the image. type: bool
- size: size of the filter kernel. type: int
- padding: padding mode, e.g. "symmetric". type: string
- name_save: saving name added to the end of every radiomics extraction results table (only if the filter was applied). type: string
e.g.
{
"imParamFilter" : {
"mean" : {
"ndims" : 3,
"orthogonal_rot": false,
"size" : 5,
"padding" : "symmetric",
"name_save" : "mean5"
}
}
}
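A mean filter is a uniform (box) filter, so its effect can be previewed with SciPy (an illustration only, not MEDimage's implementation; note that SciPy calls the equivalent of numpy's "symmetric" padding "reflect"):

```python
import numpy as np
from scipy import ndimage

# A 5x5x5 mean filter applied to a random 3D volume
img = np.random.default_rng(0).normal(size=(10, 10, 10))
smoothed = ndimage.uniform_filter(img, size=5, mode="reflect")
```

Smoothing averages neighbouring voxels, so the filtered volume keeps its shape while its intensity variance drops.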
log
Parameters of the Laplacian of Gaussian filter.
type: dict
options:
- ndims: dimension of the imaging data, usually 3. type: int
- sigma: standard deviation of the Gaussian; controls the scale of the convolutional operator. type: float
- orthogonal_rot: if true, the filter response is averaged over the orthogonal rotations of the image. type: bool
- padding: padding mode, e.g. "constant". type: string
- name_save: saving name added to the end of every radiomics extraction results table (only if the filter was applied). type: string
e.g.
{
"imParamFilter" : {
"log" : {
"ndims" : 3,
"sigma" : 1.5,
"orthogonal_rot" : false,
"padding" : "constant",
"name_save" : "log_1.5"
}
}
}
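The role of sigma can be previewed with SciPy's Laplacian of Gaussian (an illustration only, not MEDimage's implementation):

```python
import numpy as np
from scipy import ndimage

# Filtering an impulse returns the LoG kernel itself, centered on the impulse
img = np.zeros((9, 9, 9))
img[4, 4, 4] = 1.0
response = ndimage.gaussian_laplace(img, sigma=1.5, mode="constant")
```

Larger sigma values widen the Gaussian and make the operator respond to coarser structures.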
laws
Parameters of the Laws filter.
type: dict
options:
- config: list of strings of every 1D filter used to create the Laws kernel, e.g. ["L5", "E5", "E5"]. type: List[str]
- energy_distance: the Chebyshev distance that will be used to create the Laws texture energy image. type: float
- rot_invariance: if true, the filter response is made rotation invariant. type: bool
- orthogonal_rot: if true, the filter response is averaged over the orthogonal rotations of the image. type: bool
- energy_image: if true, the Laws texture energy image is computed. type: bool
- padding: padding mode, e.g. "symmetric". type: string
- name_save: saving name added to the end of every radiomics extraction results table (only if the filter was applied). type: string
e.g.
{
"imParamFilter" : {
"laws" : {
"config" : ["L5", "E5", "E5"],
"energy_distance" : 7,
"rot_invariance" : true,
"orthogonal_rot" : false,
"energy_image" : true,
"padding" : "symmetric",
"name_save" : "laws_l5_e5_e5_7"
}
}
}
Note
The order of the 1D filters in the Laws filter configuration matters: the configuration list is used to compute an outer product, and the outer product is not commutative.
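The non-commutativity mentioned in the note can be checked directly. The 1D kernels below are the commonly published Laws vectors (an assumption; MEDimage's exact definitions may differ):

```python
import numpy as np

# Commonly published 1D Laws kernels (assumed definitions)
L5 = np.array([1, 4, 6, 4, 1])    # level
E5 = np.array([-1, -2, 0, 2, 1])  # edge

def laws_kernel(filters):
    """Build an N-D Laws kernel as the outer product of 1D kernels, in order."""
    k = filters[0]
    for f in filters[1:]:
        k = np.multiply.outer(k, f)
    return k

k1 = laws_kernel([L5, E5, E5])  # the ["L5", "E5", "E5"] configuration
k2 = laws_kernel([E5, E5, L5])  # same 1D filters, different order
print(k1.shape)                  # (5, 5, 5)
print(np.array_equal(k1, k2))    # False: the outer product is not commutative
```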
gabor
Parameters of the Gabor filter.
type: dict
options:
- sigma: standard deviation of the Gaussian envelope; controls the scale of the filter. type: float
- lambda: wavelength, or inverse of the frequency. type: float
- gamma: spatial aspect ratio. type: float
- theta: angle of the rotation matrix. type: str
- rot_invariance: if true, the filter response is made rotation invariant. type: bool
- orthogonal_rot: if true, the filter response is averaged over the orthogonal rotations of the image. type: bool
- padding: padding mode, e.g. "symmetric". type: string
- name_save: saving name added to the end of every radiomics extraction results table (only if the filter was applied). type: string
e.g.
{
"imParamFilter" : {
"gabor" : {
"sigma" : 5,
"lambda" : 2,
"gamma" : 1.5,
"theta" : "Pi/8",
"rot_invariance" : true,
"orthogonal_rot" : true,
"padding" : "symmetric",
"name_save" : "gabor_5_2_1.5"
}
}
}
Note
The theta parameter is an angle in radians but must be specified as a string; for example \(\frac{\pi}{2}\) should be specified as “Pi/2”.
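A string such as "Pi/8" maps naturally onto a value in radians; a small parser illustrating the format (not MEDimage's own parsing code):

```python
import math

def parse_angle(s):
    """Convert strings like "Pi/8" or "Pi" to radians; plain numbers pass through."""
    s = s.replace(" ", "")
    if "/" in s:
        num, den = s.split("/")
        return (math.pi if num.lower() == "pi" else float(num)) / float(den)
    return math.pi if s.lower() == "pi" else float(s)

print(parse_angle("Pi/8"))  # 0.39269908169872414
```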
wavelet
Parameters of the wavelet filter.
type: dict
options:
- ndims: dimension of the imaging data, usually 3. type: int
- basis_function: wavelet name used to create the kernel. Custom user wavelets are also supported. type: string
- subband: string of the 1D wavelet kernels to apply along each dimension ("L" for low-pass, "H" for high-pass), e.g. "LLH". type: string
- level: the number of decomposition steps to perform. type: int
- rot_invariance: if true, the filter response is made rotation invariant. type: bool
- padding: padding mode, e.g. "symmetric". type: string
- name_save: saving name added to the end of every radiomics extraction results table (only if the filter was applied). type: string
e.g.
{
"imParamFilter" : {
"wavelet" : {
"ndims" : 3,
"basis_function" : "db3",
"subband" : "LLH",
"level" : 1,
"rot_invariance" : true,
"padding" : "symmetric",
"name_save" : "Wavelet_db3_LLH"
}
}
}
textural
Parameters of the textural filter.
type: dict
options:
- family: texture features family; for now only "glcm" is supported. type: string
- discretization: discretization parameters for the texture features (defined below). type: dict
- local: whether to discretize the ROI locally or globally. type: bool
- size: filter size. type: int
- name_save: saving name added to the end of every radiomics extraction results table (only if the filter was applied). type: string
Discretization (Textural filters)
Discretization parameters for the textural filter.
type: dict
options:
- type: discretization algorithm, "FBS" (fixed bin size) or "FBN" (fixed bin number). type: string
- bn: bin number; set it when "FBN" is used, otherwise null. type: int
- bw: bin size; set it when "FBS" is used, otherwise null. type: int
- adapted: whether to use adapted discretization. type: bool
e.g.
{
"imParamFilter" : {
"textural" : {
"family" : "glcm",
"discretization": {
"type" : "FBN",
"bn" : null,
"bw" : 25,
"adapted" : true
},
"size" : 3,
"local" : true,
"name_save" : "glcm_local_fbn_25hu_adapted"
}
}
}
Example of a full settings dictionary
Here is an example of a complete settings dictionary:
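A complete settings file simply gathers the fragments above under one JSON object. A minimal sketch assembled in Python (the values are illustrative, taken from the examples in this section, not recommended defaults):

```python
import json

# Minimal illustrative settings dictionary built from fragments shown above
settings = {
    "n_batch": 8,
    "roi_type_labels": ["GTV"],
    "roi_types": ["GTVMassOnly"],
    "imParamCT": {
        "box_string": "box7",
        "interp": {
            "scale_non_text": [2, 2, 3],
            "vol_interp": "linear",
            "roi_interp": "nearest",
            "roi_pv": 0.5,
        },
        "reSeg": {"range": [-500, 500], "outliers": "Collewet"},
        "discretization": {"IH": {"type": "FBS", "val": 25}},
    },
}

# Serialize to the JSON text that would be saved as the configuration file
text = json.dumps(settings, indent=4)
print(text[:20])
```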
Learning
This section will walk you through the details of how to set up the configuration file for the machine learning part of the pipeline. It is separated into the following subdivisions:
Experiment Design Parameters
This set of parameters defines the experiment design (data splitting, splitting proportion, etc.). It is organized as follows:
{
"testSets": ["Define method here"],
"method name": "Define method here"
}
Now let’s specify the parameters for the selected method; for instance, in the case of the Random
and CV
methods:
Splitting methods
Type of sets to create.
type: object
properties:
- Random: random splitting method. type: object
  properties:
  - method: method of splitting the data. type: string. Options: "SubSampling" (the data will be randomly split) or an institution-based method (the data will be split based on institutions).
  - nSplits: number of splits to create. type: int
  - stratifyInstitutions: if set, the splits are stratified by institution. type: bool
  - testProportion: proportion of the test set. type: float
  - seed: seed for the random number generator. type: int
- CV: cross-validation splitting method. type: object
  properties: the number of folds to use (type: int) and a seed for the random number generator (type: int).
Example
{
"Random": {
"method": "SubSampling",
"nSplits": 10,
"stratifyInstitutions": 1,
"testProportion": 0.33,
"seed": 54288
}
}
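The SubSampling settings above read as: draw 10 random train/test partitions, each holding out 33% of the samples, reproducibly seeded. A sketch of that logic (an illustration of the semantics, not MEDimage's code):

```python
import numpy as np

def sub_sampling_splits(n_samples, n_splits, test_proportion, seed):
    """Draw n_splits random (train, test) index partitions."""
    rng = np.random.default_rng(seed)
    n_test = int(round(n_samples * test_proportion))
    splits = []
    for _ in range(n_splits):
        perm = rng.permutation(n_samples)
        splits.append((perm[n_test:], perm[:n_test]))  # (train, test)
    return splits

splits = sub_sampling_splits(n_samples=100, n_splits=10,
                             test_proportion=0.33, seed=54288)
print(len(splits))        # 10
print(len(splits[0][1]))  # 33
```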
Data Cleaning Parameters
This set of parameters defines the data cleaning process. It is organized as follows:
{
"method name": {
"define parameters here"
},
"another method": {
"define parameters here"
}
}
Cleaning methods
Feature cleaning method name.
type: object
properties:
- default: default cleaning method. type: string
Now let’s specify the parameters for the selected cleaning method; for instance, in the case of the default
method:
Chosen method’s parameters
Feature cleaning parameters.
type: object
properties:
- feature: per-feature cleaning parameters. type: object
  properties:
  - continuous: continuous feature cleaning parameters. type: object
    properties:
    - missingCutoffps: maximum percentage cut-off of missing features per sample. Samples with more missing features than this cut-off will be removed. type: float
    - covCutoff: minimal coefficient-of-variation cut-off over samples per variable. Variables with a lower coefficient of variation than this cut-off will be removed. type: float
    - missingCutoffpf: maximal percentage cut-off of missing samples per variable. Features with more missing samples than this cut-off will be removed. type: float
    - imputation: imputation method for missing values; default is "mean". type: string. Options: "mean" (impute missing values with the mean of the feature), "median" (impute with the median of the feature), "random" (impute with a random value from the feature set).
Example
{
"default":
{
"feature": {
"continuous": {
"missingCutoffps": 0.25,
"covCutoff": 0.1,
"missingCutoffpf": 0.1,
"imputation": "mean"
}
}
}
}
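The cut-offs above can be sketched with pandas. This is an illustration of the semantics only; the order of operations (samples first, then features) is an assumption, not MEDimage's implementation:

```python
import numpy as np
import pandas as pd

def clean_continuous(df, missing_cutoff_ps=0.25, cov_cutoff=0.1,
                     missing_cutoff_pf=0.1, imputation="mean"):
    # 1) drop samples (rows) with too many missing features
    df = df.loc[df.isna().mean(axis=1) <= missing_cutoff_ps]
    # 2) drop features (columns) with too many missing samples
    df = df.loc[:, df.isna().mean(axis=0) <= missing_cutoff_pf]
    # 3) drop features with a low coefficient of variation
    cov = df.std() / df.mean().abs()
    df = df.loc[:, cov > cov_cutoff]
    # 4) impute the remaining missing values
    if imputation == "mean":
        df = df.fillna(df.mean())
    return df

df = pd.DataFrame({"a": [1.0, 2.0, np.nan, 4.0],
                   "b": [1.0, 1.0, 1.0, 1.0],   # no variation -> dropped
                   "c": [10.0, 20.0, 30.0, 40.0]})
print(clean_continuous(df).columns.tolist())  # ['a', 'c']
```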
Note
Note that you can add as many methods as you want, for other feature types (categorical, ordinal, etc.) and for other cleaning methods (e.g. PCA
).
Data Normalization Parameters
Data normalization aims to remove batch effects from the data. This set of parameters defines the data normalization process. It is organized as follows:
{
"standardCombat": {
"define parameters here"
}
}
Chosen method parameters
Normalization method name.
type: string
options:
- "standardCombat": standard ComBat normalization method.
Note
For now only the standardCombat
method is available and it does not require any parameters.
Feature Set Reduction Parameters
Feature set reduction consists of reducing the number of features in the data by removing correlated features, selecting important features, etc. This set of parameters defines the feature set reduction process. It is organized as follows:
{
"selected method": {
"define parameters here"
}
}
method name
Feature set reduction method name.
type: string
options:
- "FDA": false discovery avoidance method.
- "FDAbalanced": balanced version of the false discovery avoidance method, where the same number of features is selected from each table.
Now let’s specify the parameters for the selected feature set reduction method; for instance, in the case of the FDA
method:
FDA method
Feature set reduction parameters.
type: object
properties:
- FDA: the FDA method's parameters. type: object
  properties:
  - nSplits: number of splits to use for the FDA algorithm. type: int
  - corrType: type of correlation to use for the FDA algorithm. type: string. Options: "Spearman" (Spearman correlation) or "Pearson" (Pearson correlation).
  - threshStableStart: stability threshold used to cut off unstable features at the beginning of the FDA algorithm. type: float
  - threshInterCorr: threshold used to cut off inter-correlated features. type: float
  - minNfeatStable: minimum number of stable features to keep before the inter-correlation step. type: int
  - minNfeatInterCorr: minimum number of inter-correlated features to keep. type: int
  - minNfeat: minimum number of features to keep at the end of the FDA algorithm. type: int
  - seed: seed for the random number generator. type: int
Example
{
"FDA": {
"nSplits": 100,
"corrType": "Spearman",
"threshStableStart": 0.5,
"threshInterCorr": 0.7,
"minNfeatStable": 100,
"minNfeatInterCorr": 60,
"minNfeat": 5,
"seed": 54288
}
}
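One step of the procedure, cutting off inter-correlated features above threshInterCorr, can be sketched as follows (a greedy illustration under an assumed feature ordering, not the published FDA algorithm):

```python
import pandas as pd

def drop_intercorrelated(feats, thresh=0.7):
    """Greedily keep features in column order, dropping any feature whose
    absolute Spearman correlation with an already-kept feature exceeds thresh."""
    corr = feats.corr(method="spearman").abs()
    kept = []
    for col in feats.columns:
        if all(corr.loc[col, k] <= thresh for k in kept):
            kept.append(col)
    return kept

feats = pd.DataFrame({"f1": [1, 2, 3, 4, 5],
                      "f2": [2, 4, 6, 8, 10],  # perfectly correlated with f1
                      "f3": [3, 1, 4, 2, 5]})
print(drop_intercorrelated(feats, thresh=0.7))  # ['f1', 'f3']
```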
Note
Only FDA
and FDAbalanced
methods are available for now and they share the same parameters.
Machine Learning Parameters
This set of parameters defines the machine learning process, algorithm, and parameters. It is organized as follows:
{
"selected algorithm": {
"define parameters here"
}
}
Now let’s specify the parameters for the selected machine learning algorithm; for instance, in the case of the XGBoost
algorithm:
ML Algorithm
Machine learning algorithm name.
type: object
properties:
- XGBoost: the XGBoost algorithm's parameters. type: object
  properties:
  - varImportanceThreshold: variable importance threshold. type: float
  - optimalThreshold: optimal decision threshold for the model (may be null). type: float
  - optimizationMetric: the model's optimization metric, e.g. "AUC". type: string
  - method: method to use for the XGBoost algorithm. type: string. Options: "pycaret" (automated using PyCaret), random search using a pre-defined grid of parameters, or grid search using a pre-defined grid of parameters.
  - nameSave: name of the file used to save the model. type: string
  - seed: seed for the random number generator. type: int
Example
{
"XGBoost": {
"varImportanceThreshold": 0.3,
"optimalThreshold": null,
"optimizationMetric": "AUC",
"method": "pycaret",
"nameSave": "XGBoost03AUC",
"seed": 54288
}
}
Note
Only the XGBoost
algorithm is available for now.
Variables Definition
This set of parameters defines the variables to use for the machine learning process. It is organized as follows:
{
"selected variable": {
"define parameters here"
},
"combinations": [
"Insert combinations of variables here"
]
}
Variables
Variables to use for the machine learning process.
type: object
properties:
- combinations: list of variable combinations to use for the study. type: List[str]
For the selected variable, you can specify the following parameters:
selected variable
Variable name to use for the machine learning process.
type: object
properties:
- nameType: type of variable to use, e.g. "RadiomicsMorph". type: string
- path: path to the variable file. type: string
- scans: list of scans to use for the variable, e.g. ["T1CE"]. type: List[str]
- rois: list of ROIs to include in the study (used to identify the features file), e.g. ["GTV"]. type: List[str]
- imSpaces: radiomics level; the features file must end with this level, e.g. ["morph"]. type: List[str]
- var_datacleaning: data cleaning method to use for the variable, e.g. "default". type: string
- var_normalization: data normalization method to use for the variable, e.g. "combat". type: string
- var_fSetReduction: feature set reduction method to use for the variable, e.g. "FDA". type: string
Example
{
"var1": {
"nameType": "RadiomicsMorph",
"path": "setToMyFeaturesInWorkspace",
"scans": ["T1CE"],
"rois": ["GTV"],
"imSpaces": ["morph"],
"var_datacleaning": "default",
"var_normalization": "combat",
"var_fSetReduction": "FDA"
},
"combinations": [
"var1"
]
}