DefaultParamsReader

class pyspark.ml.util.DefaultParamsReader(cls)[source]

Specialization of MLReader for Params types.

Default MLReader implementation for transformers and estimators that contain basic (JSON-serializable) params and no data. This will not handle more complex params or types with data (e.g., models with coefficients).

New in version 2.3.0.
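
A typical use is indirect: a transformer whose state lives entirely in JSON-serializable params can opt into this reader through the DefaultParamsReadable mixin. A minimal sketch, assuming an active Spark installation; the UpperCaser class, column names, and path are illustrative, not part of the API:

from pyspark.ml import Transformer
from pyspark.ml.param.shared import HasInputCol, HasOutputCol
from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()


class UpperCaser(Transformer, HasInputCol, HasOutputCol,
                 DefaultParamsReadable, DefaultParamsWritable):
    # Only JSON-serializable params (inputCol, outputCol) and no model data,
    # so the default reader/writer pair suffices.
    def setInputCol(self, value):
        return self._set(inputCol=value)

    def setOutputCol(self, value):
        return self._set(outputCol=value)

    def _transform(self, dataset):
        return dataset.withColumn(self.getOutputCol(),
                                  F.upper(F.col(self.getInputCol())))


# write() comes from DefaultParamsWritable; UpperCaser.load() calls
# UpperCaser.read(), which returns a DefaultParamsReader for this class.
tf = UpperCaser().setInputCol("text").setOutputCol("text_upper")
tf.write().overwrite().save("/tmp/uppercaser")
restored = UpperCaser.load("/tmp/uppercaser")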

Methods

getAndSetParams(instance, metadata[, skipParams])

Extract Params from metadata, and set them in the instance.

isPythonParamsInstance(metadata)

Check whether the given metadata describes a Python (pyspark.ml) Params instance.

load(path)

Load the ML instance from the input path.

loadMetadata(path, sc[, expectedClassName])

Load metadata saved using DefaultParamsWriter.saveMetadata().

loadParamsInstance(path, sc)

Load a Params instance from the given path, and return it.

session(sparkSession)

Sets the Spark Session to use for saving/loading.

Attributes

sc

Returns the underlying SparkContext.

sparkSession

Returns the user-specified Spark Session or the default.

Methods Documentation

static getAndSetParams(instance, metadata, skipParams=None)[source]

Extract Params from metadata, and set them in the instance.
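
For instance, the param map captured at save time can be re-applied to a fresh instance. A sketch reusing the illustrative UpperCaser class and /tmp/uppercaser path from above:

from pyspark.ml.util import DefaultParamsReader

metadata = DefaultParamsReader.loadMetadata("/tmp/uppercaser", spark.sparkContext)
fresh = UpperCaser()
DefaultParamsReader.getAndSetParams(fresh, metadata)
fresh.getInputCol()  # 'text', restored from the saved paramMap

# skipParams lists param names to leave untouched:
DefaultParamsReader.getAndSetParams(UpperCaser(), metadata, skipParams=["outputCol"])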

static isPythonParamsInstance(metadata)[source]

Check whether the given metadata describes a Python (pyspark.ml) Params instance.

load(path)[source]

Load the ML instance from the input path.
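
In practice the reader is usually obtained from a DefaultParamsReadable class rather than constructed directly. A sketch with the illustrative class and path from above:

reader = UpperCaser.read()  # a DefaultParamsReader bound to UpperCaser
restored = reader.load("/tmp/uppercaser")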

static loadMetadata(path, sc, expectedClassName='')[source]

Load metadata saved using DefaultParamsWriter.saveMetadata().

Parameters

path : str
sc : pyspark.SparkContext
expectedClassName : str, optional

    If non-empty, this is checked against the loaded metadata.
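
A sketch of inspecting the result, reusing the illustrative path from above; the key names shown match what DefaultParamsWriter.saveMetadata() records:

metadata = DefaultParamsReader.loadMetadata("/tmp/uppercaser", spark.sparkContext)
metadata["class"]         # fully qualified class name of the saved instance
metadata["sparkVersion"]  # Spark version that wrote the metadata
metadata["paramMap"]      # explicitly set params

# A non-empty expectedClassName that does not match raises an AssertionError:
DefaultParamsReader.loadMetadata("/tmp/uppercaser", spark.sparkContext,
                                 expectedClassName="pyspark.ml.feature.Binarizer")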

static loadParamsInstance(path, sc)[source]

Load a Params instance from the given path, and return it. This assumes the instance inherits from MLReadable.
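
This is handy for generic code (e.g., meta-algorithms) that must reload whatever type was saved at a path without knowing the class up front. A sketch reusing the illustrative path from above:

instance = DefaultParamsReader.loadParamsInstance("/tmp/uppercaser", spark.sparkContext)
type(instance).__name__  # 'UpperCaser'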

session(sparkSession)

Sets the Spark Session to use for saving/loading.
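
session() returns the reader itself, so it chains with load(). An illustrative sketch with the class and path from above:

restored = UpperCaser.read().session(spark).load("/tmp/uppercaser")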

Attributes Documentation

sc

Returns the underlying SparkContext.

sparkSession

Returns the user-specified Spark Session or the default.