Pipeline management
The Entities' creation section documented how to create a new Pipeline. This section describes the pipeline's attributes and the utility methods available for working with pipelines.
In the following, it is assumed that the my_config.py module contains an already implemented Taipy configuration.
Pipeline attributes¶
The pipeline creation method returns a Pipeline entity, identified by a unique id generated by Taipy. A pipeline also holds various properties, each accessible as an attribute of the pipeline:
- config_id: The id of the pipeline configuration.
- subscribers: The list of Tuple(callback, params) representing the subscribers.
- properties: The complete dictionary of the pipeline properties. It includes a copy of the properties of the pipeline configuration, in addition to the properties provided at the creation and at runtime.
- tasks: The dictionary holding the various tasks of the pipeline. The key corresponds to the config_id of the task while the value is the task itself.
- parent_id: The identifier of the parent, which can be a pipeline, scenario, cycle or None.
- Each property of the properties dictionary is also directly exposed as an attribute.
- Each nested entity is also exposed as an attribute of the pipeline. The attribute name corresponds to the config_id of the nested entity.
Example
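The original code listing was lost in extraction; below is a minimal sketch of accessing these attributes, assuming my_config.py exposes a pipeline configuration named pipeline_cfg (a hypothetical name):

```python
import taipy as tp
from my_config import pipeline_cfg  # assumed pipeline configuration

# Instantiate a pipeline from its configuration.
pipeline = tp.create_pipeline(pipeline_cfg)

print(pipeline.id)          # unique identifier generated by Taipy
print(pipeline.config_id)   # id of the pipeline configuration
print(pipeline.properties)  # full dictionary of the pipeline properties
print(pipeline.tasks)       # {task_config_id: Task} dictionary
print(pipeline.parent_id)   # parent scenario, cycle, or None
```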
Get pipeline by id¶
A pipeline can be retrieved from its id by using the get() method:
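The original code listing was lost; a minimal sketch, again assuming my_config.py defines a hypothetical pipeline_cfg:

```python
import taipy as tp
from my_config import pipeline_cfg  # assumed pipeline configuration

pipeline = tp.create_pipeline(pipeline_cfg)

# Retrieve the same pipeline entity by its id.
pipeline_retrieved = tp.get(pipeline.id)
```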
Here, the two variables pipeline and pipeline_retrieved are equal.
Get pipeline by config id¶
A pipeline can also be retrieved from a scenario, by accessing the scenario attribute named after the pipeline's config_id.
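The original code listing was lost; a minimal sketch, assuming my_config.py defines a scenario configuration scenario_cfg whose pipeline has the config_id "sales" (both hypothetical names):

```python
import taipy as tp
from my_config import scenario_cfg  # assumed scenario configuration

scenario = tp.create_scenario(scenario_cfg)

# The pipeline is exposed as a scenario attribute named after
# its config_id ("sales" in this assumed configuration).
pipeline = scenario.sales
```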
Get all pipelines¶
All the pipelines can be retrieved using the get_pipelines() method. This method returns the list of all existing pipelines.
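This can be sketched as follows:

```python
import taipy as tp

# Returns the list of all existing Pipeline entities.
all_pipelines = tp.get_pipelines()
```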
Delete a pipeline¶
A pipeline can be deleted by using delete(), which takes the pipeline id as a parameter. The deletion is also propagated to the nested tasks, data nodes, and jobs if they are not shared with any other pipeline.
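A minimal sketch, again assuming my_config.py defines a hypothetical pipeline_cfg:

```python
import taipy as tp
from my_config import pipeline_cfg  # assumed pipeline configuration

pipeline = tp.create_pipeline(pipeline_cfg)

# Delete the pipeline; nested tasks, data nodes, and jobs are also
# deleted if they are not shared with any other pipeline.
tp.delete(pipeline.id)
```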