18893
6691
sklearn.pipeline.FeatureUnion(votingclassifier=sklearn.ensemble.voting_classifier.VotingClassifier(dtc=sklearn.tree.tree.DecisionTreeClassifier,etc=sklearn.tree.tree.ExtraTreeClassifier),functiontransformer=sklearn.preprocessing._function_transformer.FunctionTransformer)
sklearn.FeatureUnion(VotingClassifier,FunctionTransformer)
sklearn.pipeline.FeatureUnion
2
openml==0.12.2,sklearn==0.18.1
Concatenates results of multiple transformer objects.
This estimator applies a list of transformer objects in parallel to the
input data, then concatenates the results. This is useful to combine
several feature extraction mechanisms into a single transformer.
Parameters of the transformers may be set using their name and the parameter
name separated by a '__'. A transformer may be replaced entirely by
setting the parameter with its name to another transformer,
or removed by setting it to ``None``.
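The behaviour described above can be sketched with a minimal, self-contained example. This is written against a recent scikit-learn release (the flow itself was serialized against sklearn 0.18.1, whose API differs slightly); the step names ``scaler`` and ``pca`` are illustrative choices, not part of the flow above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import StandardScaler

X = np.random.RandomState(0).rand(10, 4)

# two transformers run in parallel on the same input;
# their outputs are concatenated column-wise
union = FeatureUnion([
    ("scaler", StandardScaler()),   # keeps all 4 columns, rescaled
    ("pca", PCA(n_components=2)),   # adds 2 principal components
])
Xt = union.fit_transform(X)         # shape: (10, 4 + 2)

# nested parameters use the '<step>__<param>' convention
union.set_params(pca__n_components=1)
```

Note that in the 0.18-era API a step is removed by setting it to ``None``; newer releases use the string ``'drop'`` instead.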
2021-08-13T19:23:05
English
sklearn==0.18.1
numpy>=1.6.1
scipy>=0.9
n_jobs
1
transformer_list
[{"oml-python:serialized_object": "component_reference", "value": {"key": "votingclassifier", "step_name": "votingclassifier"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "functiontransformer", "step_name": "functiontransformer"}}]
transformer_weights
null
votingclassifier
18894
6691
sklearn.ensemble.voting_classifier.VotingClassifier(dtc=sklearn.tree.tree.DecisionTreeClassifier,etc=sklearn.tree.tree.ExtraTreeClassifier)
sklearn.VotingClassifier
sklearn.ensemble.voting_classifier.VotingClassifier
2
openml==0.12.2,sklearn==0.18.1
Soft Voting/Majority Rule classifier for unfitted estimators.
.. versionadded:: 0.17
2021-08-13T19:23:05
English
sklearn==0.18.1
numpy>=1.6.1
scipy>=0.9
estimators
list of (string, estimator) tuples
[{"oml-python:serialized_object": "component_reference", "value": {"key": "dtc", "step_name": "dtc"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "etc", "step_name": "etc"}}]
Invoking the ``fit`` method on the ``VotingClassifier`` will fit clones
of those original estimators that will be stored in the class attribute
`self.estimators_`
n_jobs
int
1
The number of jobs to run in parallel for ``fit``
If -1, then the number of jobs is set to the number of cores.
voting
str
"hard"
If 'hard', uses predicted class labels for majority rule voting
Else if 'soft', predicts the class label based on the argmax of
the sums of the predicted probabilities, which is recommended for
an ensemble of well-calibrated classifiers
weights
array
null
Sequence of weights (`float` or `int`) to weight the occurrences of
predicted class labels (`hard` voting) or class probabilities
before averaging (`soft` voting). Uses uniform weights if `None`
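A short sketch of the ``estimators``, ``voting``, and ``weights`` parameters described above, assuming a recent scikit-learn release; the iris dataset and the logistic-regression base estimator are illustrative assumptions, not taken from this flow.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 'soft' voting averages predicted class probabilities; the weights
# sequence scales each estimator's contribution before averaging
clf = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",
    weights=[2, 1],
)
clf.fit(X, y)  # fits clones of the estimators, stored on clf.estimators_
```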
dtc
18895
6691
sklearn.tree.tree.DecisionTreeClassifier
sklearn.DecisionTreeClassifier
sklearn.tree.tree.DecisionTreeClassifier
66
openml==0.12.2,sklearn==0.18.1
A decision tree classifier.
2021-08-13T19:23:05
English
sklearn==0.18.1
numpy>=1.6.1
scipy>=0.9
class_weight
dict
null
Weights associated with classes in the form ``{class_label: weight}``
If not given, all classes are supposed to have weight one. For
multi-output problems, a list of dicts can be provided in the same
order as the columns of y
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``
For multi-output, the weights of each column of y will be multiplied
Note that these weights will be multiplied with sample_weight (passed
through the fit method) if sample_weight is specified
criterion
string
"gini"
The function to measure the quality of a split. Supported criteria are
"gini" for the Gini impurity and "entropy" for the information gain
max_depth
int or None
null
The maximum depth of the tree. If None, then nodes are expanded until
all leaves are pure or until all leaves contain less than
min_samples_split samples
max_features
int, float, string or None
null
The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split
- If float, then `max_features` is a percentage and
`int(max_features * n_features)` features are considered at each
split
- If "auto", then `max_features=sqrt(n_features)`
- If "sqrt", then `max_features=sqrt(n_features)`
- If "log2", then `max_features=log2(n_features)`
- If None, then `max_features=n_features`
Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features
max_leaf_nodes
int or None
null
Grow a tree with ``max_leaf_nodes`` in best-first fashion
Best nodes are defined as relative reduction in impurity
If None then unlimited number of leaf nodes
min_impurity_split
float
1e-07
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf
.. versionadded:: 0.18
min_samples_leaf
int
1
The minimum number of samples required to be at a leaf node:
- If int, then consider `min_samples_leaf` as the minimum number
- If float, then `min_samples_leaf` is a percentage and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node
.. versionchanged:: 0.18
Added float values for percentages
min_samples_split
int
2
The minimum number of samples required to split an internal node:
- If int, then consider `min_samples_split` as the minimum number
- If float, then `min_samples_split` is a percentage and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split
.. versionchanged:: 0.18
Added float values for percentages
min_weight_fraction_leaf
float
0.0
The minimum weighted fraction of the sum total of weights (of all
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided
presort
bool
false
Whether to presort the data to speed up the finding of best splits in
fitting. For the default settings of a decision tree on large
datasets, setting this to true may slow down the training process
When using either a smaller dataset or a restricted depth, this may
speed up the training.
random_state
int
null
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`
splitter
string
"best"
The strategy used to choose the split at each node. Supported
strategies are "best" to choose the best split and "random" to choose
the best random split
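The parameters documented above can be exercised with a minimal sketch. It targets a recent scikit-learn release, so the 0.18-era ``presort`` and ``min_impurity_split`` parameters listed in this flow are omitted (they were removed in later versions); the dataset choice is an illustrative assumption.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(
    criterion="gini",      # split quality: "gini" or "entropy"
    splitter="best",       # vs. "random"
    max_depth=3,           # cap depth instead of growing until leaves are pure
    min_samples_split=2,   # minimum samples to split an internal node
    min_samples_leaf=1,    # minimum samples at a leaf
    random_state=0,        # fixes tie-breaking among equally good splits
)
tree.fit(X, y)
```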
openml-python
python
scikit-learn
sklearn
sklearn_0.18.1
etc
18896
6691
sklearn.tree.tree.ExtraTreeClassifier
sklearn.ExtraTreeClassifier
sklearn.tree.tree.ExtraTreeClassifier
28
openml==0.12.2,sklearn==0.18.1
An extremely randomized tree classifier.
Extra-trees differ from classic decision trees in the way they are built.
When looking for the best split to separate the samples of a node into two
groups, random splits are drawn for each of the `max_features` randomly
selected features and the best split among those is chosen. When
`max_features` is set to 1, this amounts to building a totally random
decision tree.
Warning: Extra-trees should only be used within ensemble methods.
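Following the warning above, a sketch of the intended usage: a single ``ExtraTreeClassifier`` next to ``ExtraTreesClassifier``, the ensemble that averages many such randomized trees. Written against a recent scikit-learn; dataset and estimator counts are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import ExtraTreeClassifier

X, y = load_iris(return_X_y=True)

# a single extremely randomized tree (splitter="random" by default)
# is high-variance on its own
single = ExtraTreeClassifier(random_state=0).fit(X, y)

# the recommended use is inside an ensemble, which averages many
# such trees to reduce that variance
forest = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)
```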
2021-08-13T19:23:05
English
sklearn==0.18.1
numpy>=1.6.1
scipy>=0.9
class_weight
null
criterion
"gini"
max_depth
1000
max_features
"auto"
max_leaf_nodes
null
min_impurity_split
1e-07
min_samples_leaf
1
min_samples_split
2
min_weight_fraction_leaf
0.0
random_state
null
splitter
"random"
openml-python
python
scikit-learn
sklearn
sklearn_0.18.1
openml-python
python
scikit-learn
sklearn
sklearn_0.18.1
functiontransformer
18897
6691
sklearn.preprocessing._function_transformer.FunctionTransformer
sklearn.FunctionTransformer
sklearn.preprocessing._function_transformer.FunctionTransformer
5
openml==0.12.2,sklearn==0.18.1
Constructs a transformer from an arbitrary callable.
A FunctionTransformer forwards its X (and optionally y) arguments to a
user-defined function or function object and returns the result of this
function. This is useful for stateless transformations such as taking the
log of frequencies, doing custom scaling, etc.
A FunctionTransformer will not do any checks on its function's output.
Note: If a lambda is used as the function, then the resulting
transformer will not be pickleable.
.. versionadded:: 0.17
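The "log of frequencies" use case mentioned above can be sketched as follows, against a recent scikit-learn (the ``pass_y`` parameter listed in this flow existed in 0.18 but was later removed, so it is not used here):

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# a stateless log transform, with its inverse for round-tripping
log_tf = FunctionTransformer(func=np.log1p, inverse_func=np.expm1,
                             validate=True)

X = np.array([[0.0, 1.0], [3.0, 7.0]])
Xt = log_tf.fit_transform(X)           # elementwise log(1 + x)
X_back = log_tf.inverse_transform(Xt)  # recovers the original values
```

Because ``np.log1p`` is not a lambda, the fitted transformer remains pickleable, per the note above.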
2021-08-13T19:23:05
English
sklearn==0.18.1
numpy>=1.6.1
scipy>=0.9
accept_sparse
boolean
false
Indicate that func accepts a sparse matrix as input. If validate is
False, this has no effect. Otherwise, if accept_sparse is false,
sparse matrix inputs will cause an exception to be raised
func
callable
null
The callable to use for the transformation. This will be passed
the same arguments as transform, with args and kwargs forwarded
If func is None, then func will be the identity function
inv_kw_args
dict
null
Dictionary of additional keyword arguments to pass to inverse_func.
inverse_func
callable
null
The callable to use for the inverse transformation. This will be
passed the same arguments as ``inverse_transform``, with args and
kwargs forwarded. If inverse_func is None, then inverse_func
will be the identity function
kw_args
dict
null
Dictionary of additional keyword arguments to pass to func
pass_y
bool
false
Indicate that transform should forward the y argument to the
inner callable
validate
bool
true
Indicate that the input X array should be checked before calling
func. If validate is false, there will be no input validation
If it is true, then X will be converted to a 2-dimensional NumPy
array or sparse matrix. If this conversion is not possible or X
contains NaN or infinity, an exception is raised
openml-python
python
scikit-learn
sklearn
sklearn_0.18.1
openml-python
python
scikit-learn
sklearn
sklearn_0.18.1