Train a model

Note.

Training on GPU requires an NVIDIA driver of version 390.xx or higher.


Execution format

catboost fit -f <file path> [optional parameters]

Options

Option / Description / Default value / Supported processing units
Input file settings

-f

--learn-set

The path to the input file that contains the dataset description.

Required parameter (the path must be specified).

CPU and GPU

-t

--test-set

A comma-separated list of input files that contain the validation dataset description (the format must be the same as used in the training dataset).

Omitted. If this parameter is omitted, the validation dataset isn't used.

CPU and GPU

Restriction. Only a single validation dataset can be input if the training is performed on GPU (--task-type is set to GPU)

--cd

--column-description

The path to the input file that contains the column descriptions.

If omitted, it is assumed that the first column in the file with the dataset description defines the label value, and the other columns are the values of numerical features.

CPU and GPU

--learn-pairs

The path to the input file that contains the pair descriptions.

This information is used for calculation and optimization of Pairwise metrics.

Required parameter for the pairwise metrics (the path must be specified)

CPU and GPU

--test-pairs

The path to the input file that contains the description of test pairs (the format must be the same as used for describing the training pairs).

This information is used for calculation and optimization of Pairwise metrics.

Omitted (the test dataset is not used)

CPU and GPU
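
For example, a pairwise ranking run that reads pair descriptions could look as follows (the file names train.tsv, train.cd and train.pairs are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd --learn-pairs train.pairs --loss-function PairLogit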

--learn-group-weights

The path to the input file that contains the weights of groups.

The dataset must contain the GroupId column in order to apply the file with the group weights.

The weights from this file take precedence if they are also specified in the Dataset description file.

Omitted (group weights are either read from the dataset description or set to 1 for all groups if absent in the input dataset)

CPU and GPU

--test-group-weights

The path to the input file that contains the weights of groups for the validation dataset.

The dataset must contain the GroupId column in order to apply the file with the group weights.

The weights from this file take precedence if they are also specified in the Dataset description file.

Omitted (group weights are either read from the dataset description or set to 1 for all groups if absent in the input dataset)

CPU and GPU

--delimiter

The delimiter character used to separate the data in the dataset description input file.

Only single char delimiters are supported. If the specified value contains more than one character, only the first one is used.

The input data is assumed to be tab-separated

CPU and GPU

--has-header

Read the column names from the first line of the dataset description file if this parameter is set.

False

CPU and GPU

--params-file

The path to the input JSON file that contains the training parameters, for example:

{
    "thread_count": 4,
    "loss_function": "Logloss",
    "iterations": 400
}

Names of training parameters are the same as for the Python package or the R package.

If a parameter is specified in both the JSON file and the corresponding command-line parameter, the command-line value is used.

Omitted

CPU and GPU
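
For example, assuming the parameters above are saved to a file named params.json (the dataset file names are placeholders), the file can be passed as follows; any value also given on the command line overrides the one from the file:
catboost fit --learn-set train.tsv --column-description train.cd --params-file params.json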

--nan-mode

The method to process NaN values in the input dataset.

Possible values:
  • Forbidden — NaN values are not supported, their presence raises an exception.
  • Min — Each NaN float feature is processed as the minimum value from the dataset.
  • Max — Each NaN float feature is processed as the maximum value from the dataset.
Note.

The method for processing NaN values can also be set in the Custom quantization borders and NaN modes input file. Such values override the ones specified in this parameter.

Min

CPU and GPU

Training parameters

--loss-function

The metric to use in training. The specified value also determines the machine learning problem to solve. Some metrics support optional parameters (see the Objectives and metrics section for details on each metric).

Format:
<Metric>[:<parameter 1>=<value>;..;<parameter N>=<value>]
Supported metrics:
  • RMSE
  • Logloss
  • MAE
  • CrossEntropy
  • Quantile
  • LogLinQuantile
  • Lq
  • MultiClass
  • MultiClassOneVsAll
  • MAPE
  • Poisson
  • PairLogit
  • PairLogitPairwise
  • QueryRMSE
  • QuerySoftMax
  • YetiRank
  • YetiRankPairwise
For example, use the following construction to calculate the value of Quantile with the coefficient α set to 0.1:
Quantile:alpha=0.1

RMSE

CPU and GPU
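
For example, a possible invocation using the Quantile construction shown above (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd --loss-function Quantile:alpha=0.1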

--custom-metric

Metric values to output during training. These functions are not optimized and are displayed for informational purposes only. Some metrics support optional parameters (see the Objectives and metrics section for details on each metric).

Format:
<Metric 1>[:<parameter 1>=<value>;..;<parameter N>=<value>],<Metric 2>[:<parameter 1>=<value>;..;<parameter N>=<value>],..,<Metric N>[:<parameter 1>=<value>;..;<parameter N>=<value>]
Supported metrics:
  • RMSE
  • Logloss
  • MAE
  • CrossEntropy
  • Quantile
  • LogLinQuantile
  • Lq
  • MultiClass
  • MultiClassOneVsAll
  • MAPE
  • Poisson
  • PairLogit
  • PairLogitPairwise
  • QueryRMSE
  • QuerySoftMax
  • SMAPE
  • Recall
  • Precision
  • F1
  • TotalF1
  • Accuracy
  • BalancedAccuracy
  • BalancedErrorRate
  • Kappa
  • WKappa
  • LogLikelihoodOfPrediction
  • AUC
  • R2
  • MCC
  • BrierScore
  • HingeLoss
  • HammingLoss
  • ZeroOneLoss
  • MSLE
  • MedianAbsoluteError
  • PairAccuracy
  • AverageGain
  • PFound
  • NDCG
  • PrecisionAt
  • RecallAt
  • MAP
  • CtrFactor
Examples:
  • Calculate the value of CrossEntropy:

    CrossEntropy
  • Calculate the value of Quantile with the coefficient α set to 0.1:
    Quantile:alpha=0.1

Values of all custom metrics for learn and validation datasets are saved to the Metric output files (learn_error.tsv and test_error.tsv respectively). The directory for these files is specified in the --train-dir (train_dir) parameter.

None (do not output additional metric values)

CPU
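
For example, the following sketch outputs several informational metrics in addition to the optimized Logloss objective (the dataset file names are placeholders; a validation set is passed so that test_error.tsv is written as well):
catboost fit --learn-set train.tsv --test-set test.tsv --column-description train.cd --loss-function Logloss --custom-metric AUC,Precision,Recall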

--eval-metric

The metric used for overfitting detection (if enabled) and best model selection (if enabled). Some metrics support optional parameters (see the Objectives and metrics section for details on each metric).

Format:
<Metric>[:<parameter 1>=<value>;..;<parameter N>=<value>]
Supported metrics:
  • RMSE
  • Logloss
  • MAE
  • CrossEntropy
  • Quantile
  • LogLinQuantile
  • Lq
  • MultiClass
  • MultiClassOneVsAll
  • MAPE
  • Poisson
  • PairLogit
  • PairLogitPairwise
  • QueryRMSE
  • QuerySoftMax
  • SMAPE
  • Recall
  • Precision
  • F1
  • TotalF1
  • Accuracy
  • BalancedAccuracy
  • BalancedErrorRate
  • Kappa
  • WKappa
  • LogLikelihoodOfPrediction
  • AUC
  • R2
  • MCC
  • BrierScore
  • HingeLoss
  • HammingLoss
  • ZeroOneLoss
  • MSLE
  • MedianAbsoluteError
  • PairAccuracy
  • AverageGain
  • PFound
  • NDCG
  • PrecisionAt
  • RecallAt
  • MAP
Examples:
  • R2
  • Quantile:alpha=0.3

Optimized objective is used

CPU

-i

--iterations

The maximum number of trees that can be built when solving machine learning problems.

When using other parameters that limit the number of iterations, the final number of trees may be less than the number specified in this parameter.

1000

CPU and GPU

-w

--learning-rate

The learning rate.

Used for reducing the gradient step.

The default value is defined automatically based on the dataset properties and training parameters if all of the following conditions are met:

  • The binary classification machine learning problem is being solved.

  • Some parameters are not set (refer to the list)

The value is set to 0.03 otherwise.

CPU and GPU

-r

--random-seed

The random seed used for training.

0

CPU and GPU

--l2-leaf-reg

l2-leaf-regularizer

L2 regularization coefficient. Used for leaf value calculation.

Any positive values are allowed.

3

CPU and GPU

--bootstrap-type

Bootstrap type. Defines the method for sampling the weights of objects.

Supported methods:

  • Poisson (supported for GPU only)
  • Bayesian
  • Bernoulli
  • No
Bayesian

CPU and GPU

--bagging-temperature

Defines the settings of the Bayesian bootstrap. It is used by default in classification and regression modes.

Use the Bayesian bootstrap to assign random weights to objects.

The weights are sampled from an exponential distribution if the value of this parameter is set to “1”. All weights are equal to 1 if the value of this parameter is set to “0”.

Possible values are in the range [0; +∞). The higher the value, the more aggressive the bagging.

1

CPU and GPU

--subsample

Sample rate for bagging. This parameter can be used if one of the following bootstrap types is defined:
  • Poisson
  • Bernoulli

0.66

CPU and GPU

--sampling-frequency

Frequency to sample weights and objects when building trees.

Supported values:
  • PerTree
  • PerTreeLevel
PerTreeLevel

CPU and GPU

--random-strength

The score standard deviation multiplier. Use this parameter to avoid overfitting the model.

The value of this parameter is used when selecting splits. On every iteration each possible split gets a score (for example, the score indicates how much adding this split will improve the loss function for the training dataset). The split with the highest score is selected.

The scores themselves have no randomness. When this parameter is set, a normally distributed random variable is added to the score of the feature. It has a zero mean and a variance that decreases during the training. The value of this parameter is the multiplier of the variance.

Do not use standard deviation multiplier

CPU and GPU

--use-best-model

If this parameter is set, the number of trees that are saved in the resulting model is defined as follows:
  1. Build the number of trees defined by the training parameters.
  2. Use the validation dataset to identify the iteration with the optimal value of the metric specified in  --eval-metric (eval_metric).

No trees are saved after this iteration.

This option requires a validation dataset to be provided.

True if a validation set is input (the -t or the --test-set parameter is defined) and at least one of the label values of objects in this set differs from the others. False otherwise.

CPU and GPU

--best-model-min-trees

The minimal number of trees that the best model should have. If set, the output model contains at least the given number of trees even if the best model is located within these trees.

Should be used with the --use-best-model parameter.

The minimal number of trees for the best model is not set

CPU and GPU

-n

--depth

Depth of the tree.

The range of supported values depends on the processing unit type and the type of the selected loss function:
  • CPU — Any integer up to 16.

  • GPU — Any integer up to 8 for pairwise modes (YetiRank, PairLogitPairwise and QueryCrossEntropy) and up to 16 for all other loss functions.

6

CPU and GPU

-I

--ignore-features

Indices of features to exclude from training. The non-negative indices that do not match any features are successfully ignored. For example, if five features are defined for the objects in the dataset and this parameter is set to “42”, the corresponding non-existing feature is successfully ignored.

The identifier corresponds to the feature's index. Feature indices used in train and feature importance are numbered from 0 to featureCount – 1. If a file is used as input data then any non-feature column types are ignored when calculating these indices. For example, each row in the input file contains data in the following order: categorical feature<\t>label value<\t>numerical feature. So for the row rock<\t>0<\t>42, the identifier for the “rock” feature is 0, and for the “42” feature it's 1.

Supported operators:

  • “:” — Value separator.
  • “-” — Range of values (the left and right edges are included).
For example, if training should exclude features with the identifiers 1, 2, 7, 42, 43, 44, 45, use the following construction:
1:2:7:42-45
None (use all features)

CPU and GPU
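
For example, the construction above could be passed via the short form of the option (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd -I 1:2:7:42-45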

--one-hot-max-size

Use one-hot encoding for all features with a number of different values less than or equal to the given parameter value. CTRs are not calculated for such features.

2

CPU and GPU

--has-time

Use the order of objects in the input data (do not perform random permutations during the Transforming categorical features to numerical features and Choosing the tree structure stages).

The Timestamp column type is used to determine the order of objects if specified in the input data.

False (not used; generates random permutations)

CPU and GPU

--rsm

Random subspace method. The percentage of features to use at each split selection, when features are selected over again at random.

The value must be in the range (0;1].

1

CPU

--fold-permutation-block

Objects in the dataset are grouped in blocks before the random permutations. This parameter defines the size of the blocks. The smaller the value, the slower the training. Large values may result in quality degradation.

Default value differs depending on the dataset size and ranges from 1 to 256 inclusively

CPU and GPU

--leaf-estimation-iterations

The number of gradient steps when calculating the values in leaves.

Depends on the training objective

CPU and GPU

--leaf-estimation-method

The method used to calculate the values in leaves.

Possible values:
  • Newton
  • Gradient
Depends on the mode:
  • Regression – One gradient iteration.
  • Classification – 10 Newton iterations.
  • Multiclassification – One Newton iteration.

CPU and GPU

--name

The experiment name to display in visualization tools.

experiment

CPU and GPU

--prediction-type

A comma-separated list of prediction types to output during training for the validation dataset. This information is output if a validation dataset is provided.

Supported prediction types:
  • Probability
  • Class
  • RawFormulaVal
RawFormulaVal

CPU

--fold-len-multiplier

Coefficient for changing the length of folds.

The value must be greater than 1. The best validation result is achieved with minimum values.

With values close to 1, each iteration takes a quadratic amount of memory and time for the number of objects in the iteration. Thus, low values are possible only when there is a small number of objects.

2

CPU and GPU

--approx-on-full-history

The principles for calculating the approximated values.

Possible values:
  • “False” — Use only a fraction of the fold for calculating the approximated values. The size of the fraction is calculated as follows: 1/X, where X is the specified coefficient for changing the length of folds. This mode is faster and in rare cases slightly less accurate.
  • “True” — Use all the preceding rows in the fold for calculating the approximated values. This mode is slower and in rare cases slightly more accurate.
False

CPU

--class-weights

Class weights. The values are used as multipliers for the object weights. This parameter can be used for solving classification and multiclassification problems.

For imbalanced datasets with binary classification, the weight multiplier can be set to 1 for class 0 and to (sum_negative / sum_positive) for class 1.

Tip.
  • The quantity of class weights must match the quantity of class names specified in the --class-names parameter and the number of classes specified in the --classes-count parameter.

Format:
<value for class 1>,..,<value for class N>
For example:
0.85,1.2,1
None (the weight for all classes is set to 1)

CPU and GPU
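
For example, the three class weights above could be combined with a multiclassification objective as follows (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd --loss-function MultiClass --class-weights 0.85,1.2,1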

--boosting-type

Boosting scheme.

Possible values:
  • Ordered — Usually provides better quality on small datasets, but it may be slower than the Plain scheme.
  • Plain — The classic gradient boosting scheme.
Depends on the number of objects in the training dataset and the selected learning mode

CPU and GPU

Restriction. Only the Plain mode is supported for the MultiClass loss on GPU.

--allow-const-label

Use it to train models with datasets that have equal label values for all objects.

False

CPU and GPU

Overfitting detection settings

--od-type

The type of the overfitting detector to use.

Possible values:
  • IncToDec
  • Iter
IncToDec

CPU and GPU

--od-pval

The threshold for the IncToDec overfitting detector type. The training is stopped when the specified value is reached. Requires that a validation dataset was input.

For best results, it is recommended to set a value in the range [10^-10; 10^-2].

The larger the value, the earlier overfitting is detected.

Restriction.

Do not use this parameter with the Iter overfitting detector type.

0 (the overfitting detection is turned off)

CPU and GPU

--od-wait

The number of iterations to continue the training after the iteration with the optimal metric value.
The purpose of this parameter differs depending on the selected overfitting detector type:
  • IncToDec — Ignore the overfitting detector when the threshold is reached and continue learning for the specified number of iterations after the iteration with the optimal metric value.
  • Iter — Consider the model overfitted and stop training after the specified number of iterations since the iteration with the optimal metric value.
20

CPU and GPU
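
For example, a possible setup of the Iter overfitting detector that stops training 50 iterations after the best metric value on the validation dataset (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --test-set test.tsv --column-description train.cd --od-type Iter --od-wait 50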

Binarization settings

-x

--border-count

The number of splits for numerical features. Allowed values are integers from 1 to 255 inclusively.

254 (if training is performed on CPU) or 128 (if training is performed on GPU)

CPU and GPU

--feature-border-type

The binarization mode for numerical features.

Possible values:
  • Median
  • Uniform
  • UniformAndQuantiles
  • MaxLogSum
  • MinEntropy
  • GreedyLogSum
GreedyLogSum

CPU and GPU

--output-borders-file

Save quantization borders for the current dataset to a file.

Refer to the file format description.

The file is not saved

GPU

--input-borders-file

Load custom quantization borders and NaN modes from a file (do not generate them).

Borders are automatically generated before training if this parameter is not set.

Refer to the file format description.

The results are not loaded

GPU

Multiclassification settings

--classes-count

The upper limit for the numeric class label. Defines the number of classes for multiclassification.

Only non-negative integers can be specified. The given integer should be greater than any of the label values.

If this parameter is specified and the --class-names parameter is not, the labels of all classes in the input dataset must be smaller than the given value.

  • maximum class label + 1 if the --class-names parameter is not specified
  • the quantity of classes names if the --class-names parameter is specified

CPU and GPU

--class-names

Class names. Allows you to redefine the default values when using the MultiClass and Logloss metrics.

If the upper limit for the numeric class label is specified, the number of class names must match this value.

Attention. The quantity of class names must match the quantity of class weights specified in the --class-weights parameter and the number of classes specified in the --classes-count parameter.
Format:
<name for class 1>,..,<name for class N>
For example:
smartphone,touchphone,tablet

The class names are integers from 0 to classes_count – 1

CPU and GPU
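
For example, the class names above could be used with the MultiClass metric as follows (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd --loss-function MultiClass --class-names smartphone,touchphone,tablet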

Performance settings

-T

--thread-count

The number of threads to use during training.

  • For CPU

    Optimizes the speed of execution. This parameter doesn't affect results.

  • For GPU

    The given value is used for reading the data from the hard drive and does not affect the training.

    During the training one main thread and one thread for each GPU are used.

The number of processor cores

CPU and GPU

--used-ram-limit

Attempt to limit the amount of used CPU RAM.

Restriction.
  • This option affects only the CTR calculation memory usage.
  • In some cases it is impossible to limit the amount of CPU RAM used in accordance with the specified value.
Format:
<size><measure of information>
Supported measures of information (not case-sensitive):
  • MB
  • KB
  • GB
For example:
2gb
None (memory usage is not limited)

CPU

--gpu-ram-part

How much of the GPU RAM to use for training.

0.95

GPU

--pinned-memory-size

How much pinned (page-locked) CPU RAM to use per GPU.

1073741824

GPU

--gpu-cat-features-storage

The method for storing the categorical features' values.

Possible values:
  • CpuPinnedMemory
  • GpuRam
Tip.

Use the CpuPinnedMemory value if feature combinations are used and the available GPU RAM is not sufficient.

GpuRam

GPU

--data-partition

The method for splitting the input dataset between multiple workers.

Possible values:
  • FeatureParallel — Split the input dataset by features and calculate the value of each of these features on a certain GPU.

    For example:

    • GPU0 is used to calculate the values of features indexed 0, 1, 2
    • GPU1 is used to calculate the values of features indexed 3, 4, 5, etc.
  • DocParallel — Split the input dataset by objects and calculate all features for each of these objects on a certain GPU. It is recommended to use powers of two as the value for optimal performance.

    For example:
    • GPU0 is used to calculate all features for objects indexed object_1, object_2
    • GPU1 is used to calculate all features for objects indexed object_3, object_4, etc.
Depends on the learning mode and the input dataset

GPU

Processing unit settings
--task-type

The processing unit type to use for training.

Possible values:
  • CPU
  • GPU
CPU

CPU and GPU

--devices

IDs of the GPU devices to use for training (indices are zero-based).

Format

  • <unit ID> for one device (for example, 3)
  • <unit ID1>:<unit ID2>:..:<unit IDN> for multiple devices (for example, devices='0:1:3')
  • <unit ID1>-<unit IDN> for a range of devices (for example, devices='0-3')
-1 (use all devices)

GPU
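
For example, a possible GPU run on the first two devices (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd --task-type GPU --devices 0:1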

Output settings

--logging-level

The logging level to output to stdout.

Possible values:
  • Silent — Do not output any logging information to stdout.

  • Verbose — Output the following data to stdout:

    • optimized metric
    • elapsed time of training
    • remaining time of training
  • Info — Output additional information and the number of trees.

  • Debug — Output debugging information.

Verbose

CPU and GPU

--metric-period

The frequency of iterations to calculate the values of objectives and metrics. The value should be a positive integer.

The usage of this parameter speeds up the training.

Note.

It is recommended to increase the value of this parameter to maintain training speed if a GPU processing unit type is used.

1

CPU and GPU

--verbose

The frequency of iterations to print the information to stdout. The value of this parameter should be divisible by the value of the frequency of iterations to calculate the values of objectives and metrics.

Restriction. Do not use this parameter with the --logging-level parameter.

1

CPU and GPU

--train-dir

The directory for storing the files generated during training.

Current directory

CPU and GPU

--model-size-reg

The model size regularization coefficient. The larger the value, the smaller the model size.

Possible values are in the range [0; +∞).

Large values reduce the number of feature combinations in the model. Note that the resulting quality of the model can be affected. Set the value to 0 to turn off the model size optimization option.

0.5

CPU

--snapshot-file

Settings for recovering training after an interruption.

Depending on whether the specified file exists in the file system:
  • Missing — Write information about training progress to the specified file.
  • Exists — Load data from the specified file and continue training from where it left off.
File can't be generated or read. If the value is omitted, the file name is experiment.cbsnapshot.

CPU and GPU
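
For example, the following command writes its progress to a snapshot file; rerunning the same command after an interruption continues training from the saved state (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd --snapshot-file experiment.cbsnapshot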

-m

--model-file

The name of the resulting files with the model description.

Used for solving other machine learning problems (for instance, applying a model) or defining the names of models in different output formats.

Corresponding file extensions are added to the given value if several output formats are defined in the --model-format parameter.

model.* (model.bin if the model is output in Catboost format only)

CPU and GPU

--model-format

A comma-separated list of output model formats.

Possible values:

  • CatboostBinary.
  • AppleCoreML (only datasets without categorical features are supported).
  • json (multiclassification models are not currently supported). Refer to the CatBoost JSON model tutorial for format details.
  • Python (multiclassification models are not currently supported). See the Using models exported as Python code section for details on applying the resulting model.
  • CPP (multiclassification models are not currently supported). See the Using models exported as C++ code section for details on applying the resulting model.
CatboostBinary

CPU and GPU
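
For example, the following command saves the model both in the binary format and as JSON; the corresponding extensions are added to the name given in -m (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd -m model --model-format CatboostBinary,json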

--fstr-file

The name of the resulting file that contains regular feature importance data (see Feature importance).

The file is not generated

CPU

--fstr-internal-file

The name of the resulting file that contains internal feature importance data (see Feature importance).

The file is not generated

CPU

--eval-file

The name of the resulting file that contains the model values on the validation datasets.

The format of the output file depends on the problem being solved and the number of input validation datasets.

Save the file to the current directory. The name of the file differs depending on the machine learning problem being solved and the selected metric. The file extension is eval.

CPU

--json-log

The name of the resulting file that contains metric values and time information.

catboost_training.json

CPU and GPU

--detailed-profile

Generate a file that contains profiler information.

The file is not generated

CPU and GPU

--profiler-log

The name of the resulting file that contains profiler information.

catboost_profile.log

CPU and GPU

--learn-err-log

The name of the resulting file that contains the metric value for the training dataset.

learn_error.tsv

CPU and GPU

--test-err-log

The name of the resulting file that contains the metric value for the validation dataset.

test_error.tsv

CPU and GPU

CTR settings
--simple-ctr

Binarization settings for simple categorical features.

Format:

CtrType[:TargetBorderCount=BorderCount][:TargetBorderType=BorderType][:CtrBorderCount=Count][:CtrBorderType=Type][:Prior=num_1/denum_1]..[:Prior=num_N/denum_N]
Components:
  • CtrType — The method for transforming categorical features to numerical features.

    Supported methods for training on CPU:

    • Borders
    • Buckets
    • BinarizedTargetMeanValue
    • Counter

    Supported methods for training on GPU:

    • Borders
    • Buckets
    • FeatureFreq
    • FloatTargetMeanValue
  • TargetBorderCount — The number of borders for label value binarization. Only used for regression problems. Allowed values are integers from 1 to 255 inclusively. The default value is 1.

    This option is available for training on CPU only.

  • TargetBorderType — The binarization type for the label value. Only used for regression problems.

    Possible values:

    • Median
    • Uniform
    • UniformAndQuantiles
    • MaxLogSum
    • MinEntropy
    • GreedyLogSum

    By default, MinEntropy.

    This option is available for training on CPU only.

  • CtrBorderCount — The number of splits for categorical features. Allowed values are integers from 1 to 255 inclusively.
  • CtrBorderType — The binarization type for categorical features.

    Supported values for training on CPU:
    • Uniform

    Supported values for training on GPU:

    • Median
    • Uniform
    • UniformAndQuantiles
    • MaxLogSum
    • MinEntropy
    • GreedyLogSum
  • Prior — Use the specified priors during training (several values can be specified).

    Possible formats:
    • One number — Adds the value to the numerator.
    • Two slash-delimited numbers (for GPU only) — Use this format to set a fraction. The first number is added to the numerator and the second is added to the denominator.

CPU and GPU
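
For example, an illustrative setting that follows the format above and uses the Borders method with 15 CTR borders (the dataset file names are placeholders):
catboost fit --learn-set train.tsv --column-description train.cd --simple-ctr Borders:CtrBorderCount=15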

--combinations-ctr

Binarization settings for combinations of categorical features.

Format:

CtrType[:TargetBorderCount=BorderCount][:TargetBorderType=BorderType][:CtrBorderCount=Count][:CtrBorderType=Type][:Prior=num_1/denum_1]..[:Prior=num_N/denum_N]
Components:
  • CtrType — The method for transforming categorical features to numerical features.

    Supported methods for training on CPU:

    • Borders
    • Buckets
    • BinarizedTargetMeanValue
    • Counter

    Supported methods for training on GPU:

    • Borders
    • Buckets
    • FeatureFreq
    • FloatTargetMeanValue
  • TargetBorderCount — The number of borders for label value binarization. Only used for regression problems. Allowed values are integers from 1 to 255 inclusively. The default value is 1.

    This option is available for training on CPU only.

  • TargetBorderType — The binarization type for the label value. Only used for regression problems.

    Possible values:

    • Median
    • Uniform
    • UniformAndQuantiles
    • MaxLogSum
    • MinEntropy
    • GreedyLogSum

    By default, MinEntropy.

    This option is available for training on CPU only.

  • CtrBorderCount — The number of splits for categorical features. Allowed values are integers from 1 to 255 inclusively.
  • CtrBorderType — The binarization type for categorical features.

    Supported values for training on CPU:
    • Uniform
    Supported values for training on GPU:
    • Uniform
    • Median
  • Prior — Use the specified priors during training (several values can be specified).

    Possible formats:
    • One number — Adds the value to the numerator.
    • Two slash-delimited numbers (for GPU only) — Use this format to set a fraction. The first number is added to the numerator and the second is added to the denominator.

CPU and GPU

--per-feature-ctr

Per-feature binarization settings for categorical features.

Format:

FeatureId:CtrType:[:TargetBorderCount=BorderCount][:TargetBorderType=BorderType][:CtrBorderCount=Count][:CtrBorderType=Type][:Prior=num_1/denum_1]..[:Prior=num_N/denum_N]
Components:
  • FeatureId — A zero-based feature identifier.
  • CtrType — The method for transforming categorical features to numerical features.

    Supported methods for training on CPU:

    • Borders
    • Buckets
    • BinarizedTargetMeanValue
    • Counter

    Supported methods for training on GPU:

    • Borders
    • Buckets
    • FeatureFreq
    • FloatTargetMeanValue
  • TargetBorderCount — The number of borders for label value binarization. Only used for regression problems. Allowed values are integers from 1 to 255 inclusively. The default value is 1.

    This option is available for training on CPU only.

  • TargetBorderType — The binarization type for the label value. Only used for regression problems.

    Possible values:

    • Median
    • Uniform
    • UniformAndQuantiles
    • MaxLogSum
    • MinEntropy
    • GreedyLogSum

    By default, MinEntropy.

    This option is available for training on CPU only.

  • CtrBorderCount — The number of splits for categorical features. Allowed values are integers from 1 to 255 inclusively.
  • CtrBorderType — The binarization type for categorical features.

    Supported values for training on CPU:
    • Uniform

    Supported values for training on GPU:

    • Median
    • Uniform
    • UniformAndQuantiles
    • MaxLogSum
    • MinEntropy
    • GreedyLogSum
  • Prior — Use the specified priors during training (several values can be specified).

    Possible formats:
    • One number — Adds the value to the numerator.
    • Two slash-delimited numbers (for GPU only) — Use this format to set a fraction. The first number is added to the numerator and the second is added to the denominator.

CPU and GPU

--counter-calc-method

The method for calculating the Counter CTR type.

Possible values:
  • SkipTest — Objects from the validation dataset are not considered at all
  • Full — All objects from both learn and validation datasets are considered
Full

CPU and GPU

--max-ctr-complexity

The maximum number of categorical features that can be combined.

4

CPU and GPU

--ctr-leaf-count-limit

The maximum number of leaves with categorical features. If the quantity exceeds the specified value, a part of the leaves is discarded.

The leaves to be discarded are selected as follows:

  1. The leaves are sorted by the frequency of the values.
  2. The top N leaves are selected, where N is the value specified in the parameter.
  3. All leaves starting from N+1 are discarded.

This option reduces the resulting model size and the amount of memory required for training. Note that the resulting quality of the model can be affected.

The number of leaves with categorical features is not limited

CPU

--store-all-simple-ctr

Ignore categorical features, which are not used in feature combinations, when choosing candidates for exclusion.

Use this parameter with --ctr-leaf-count-limit only.

Both simple features and feature combinations are taken into account when limiting the number of leaves with categorical features

CPU

--final-ctr-computation-mode

Final CTR computation mode.

Possible values:
  • Default — Compute final CTRs for learn and validation datasets.
  • Skip — Do not compute final CTRs for learn and validation datasets. In this case, the resulting model can not be applied. This mode decreases the size of the resulting model. It can be useful for research purposes when only the metric values have to be calculated.
Default

CPU and GPU

Usage examples

Train a model with 100 trees on a comma-separated pool with a header:

catboost fit --learn-set train.csv --test-set test.csv --column-description train.cd --loss-function RMSE --iterations 100 --delimiter=',' --has-header
Train a classification model on GPU:
catboost fit --learn-set ../pytest/data/adult/train_small --column-description ../pytest/data/adult/train.cd --task-type GPU