
taipy.config.Config

Configuration singleton.

check() classmethod

Check configuration.

This method logs issue messages and returns an issue collector.

Returns:

IssueCollector: A collector containing the info, warning, and error issues.
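
For instance, a minimal usage sketch, assuming the collector exposes an errors list whose issues carry field, value, and message attributes:

```python
from taipy import Config

issues = Config.check()  # also logs the issue messages as a side effect
for issue in issues.errors:
    print(issue.field, issue.value, issue.message)
```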

configure_csv_data_node(id, default_path=None, has_header=True, exposed_type='pandas', scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new CSV data node configuration.

Parameters:

- id (str, required): The unique identifier of the new CSV data node configuration.
- default_path (str, default: None): The default path of the CSV file.
- has_header (bool, default: True): If True, indicates that the CSV file has a header.
- exposed_type (default: "pandas"): The exposed type of the data read from the CSV file.
- scope (Scope, default: Scope.SCENARIO): The scope of the CSV data node configuration.
- cacheable (bool, default: False): If True, indicates that the CSV data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new CSV data node configuration.
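
A hedged sketch of a CSV declaration; the identifier and path below are hypothetical, and import paths may vary by Taipy version:

```python
from taipy import Config, Scope

sales_history_cfg = Config.configure_csv_data_node(
    id="sales_history",             # hypothetical identifier
    default_path="data/sales.csv",  # hypothetical path
    exposed_type="pandas",          # expose the content as a pandas DataFrame
    scope=Scope.SCENARIO,
)
```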

configure_data_node(id, storage_type='pickle', scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new data node configuration.

Parameters:

- id (str, required): The unique identifier of the new data node configuration.
- storage_type (str, default: "pickle"): The data node configuration storage type. The possible values are "pickle" (the default value, unless overridden by the storage_type set in the default data node configuration; see configure_default_data_node()), "csv", "excel", "sql_table", "sql", "json", "in_memory", or "generic".
- scope (Scope, default: Scope.SCENARIO): The scope of the data node configuration. The default is Scope.SCENARIO, or the scope specified in configure_default_data_node().
- cacheable (bool, default: False): If True, indicates that the data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new data node configuration.
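
As a sketch, this generic entry point with "pickle" storage is equivalent to calling configure_pickle_data_node(); the identifier is hypothetical:

```python
from taipy import Config

model_cfg = Config.configure_data_node(id="model", storage_type="pickle")
```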

configure_default_data_node(storage_type, scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure the default values for data node configurations. This function creates the default data node configuration object, where all data node configuration objects will find their default values when needed.

Parameters:

- storage_type (str, required): The default storage type for all data node configurations. The possible values are "pickle" (the default value), "csv", "excel", "sql", "in_memory", "json", or "generic".
- scope (Scope, default: Scope.SCENARIO): The default scope for all data node configurations.
- cacheable (bool, default: False): If True, indicates that the data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The default data node configuration.
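
For example, a sketch that makes every subsequently configured data node default to CSV storage with a global scope:

```python
from taipy import Config, Scope

# All data nodes configured afterwards default to CSV storage and GLOBAL scope.
Config.configure_default_data_node(storage_type="csv", scope=Scope.GLOBAL)
```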

configure_default_pipeline(task_configs, **properties) staticmethod

Configure the default values for pipeline configurations.

This function creates the default pipeline configuration object, where all pipeline configuration objects will find their default values when needed.

Parameters:

- task_configs (Union[TaskConfig, List[TaskConfig]], required): The list of the task configurations that make the default pipeline configuration. This can be a single task configuration object if this pipeline holds a single task.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

PipelineConfig: The default pipeline configuration.
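
A minimal sketch; the no-op task below is hypothetical and only keeps the example self-contained:

```python
from taipy import Config

noop_cfg = Config.configure_task(id="noop", function=lambda: None)  # hypothetical task
Config.configure_default_pipeline(task_configs=[noop_cfg])
```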

configure_default_scenario(pipeline_configs, frequency=None, comparators=None, **properties) staticmethod

Configure the default values for scenario configurations.

This function creates the default scenario configuration object, where all scenario configuration objects will find their default values when needed.

Parameters:

- pipeline_configs (List[PipelineConfig], required): The list of pipeline configurations used by this scenario configuration.
- frequency (Optional[Frequency], default: None): The scenario frequency. It corresponds to the recurrence of the scenarios instantiated from this configuration. Based on this frequency, each scenario will be attached to the relevant cycle.
- comparators (Optional[Dict[str, Union[List[Callable], Callable]]], default: None): The list of functions used to compare scenarios. A comparator function is attached to a scenario's data node configuration. The key of the dictionary parameter corresponds to the data node configuration id. During the scenarios' comparison, each comparator is applied to all the data nodes instantiated from the data node configuration attached to the comparator. See compare_scenarios() for more details.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

ScenarioConfig: The default scenario configuration.
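
A sketch reusing the same hypothetical no-op task; Frequency.DAILY attaches each scenario instantiated from this configuration to a daily cycle:

```python
from taipy import Config, Frequency

noop_cfg = Config.configure_task(id="noop", function=lambda: None)       # hypothetical
pipeline_cfg = Config.configure_pipeline(id="p", task_configs=[noop_cfg])
Config.configure_default_scenario([pipeline_cfg], frequency=Frequency.DAILY)
```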

configure_default_task(function, input=None, output=None, **properties) staticmethod

Configure the default values for task configurations.

This function creates the default task configuration object, where all task configuration objects will find their default values when needed.

Parameters:

- function (Callable, required): The Python function called by Taipy to run the task.
- input (Optional[Union[DataNodeConfig, List[DataNodeConfig]]], default: None): The list of the input data node configurations. This can be a unique data node configuration if there is a single input data node, or None if there are none.
- output (Optional[Union[DataNodeConfig, List[DataNodeConfig]]], default: None): The list of the output data node configurations. This can be a unique data node configuration if there is a single output data node, or None if there are none.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

TaskConfig: The default task configuration.
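
For illustration, a sketch with a hypothetical pass-through function and hypothetical data node identifiers:

```python
from taipy import Config

def identity(data):
    """Hypothetical placeholder: returns its input unchanged."""
    return data

input_cfg = Config.configure_data_node(id="raw")
output_cfg = Config.configure_data_node(id="processed")
Config.configure_default_task(identity, input=input_cfg, output=output_cfg)
```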

configure_excel_data_node(id, default_path=None, has_header=True, sheet_name=None, exposed_type='pandas', scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new Excel data node configuration.

Parameters:

- id (str, required): The unique identifier of the new Excel data node configuration.
- default_path (str, default: None): The path of the Excel file.
- has_header (bool, default: True): If True, indicates that the Excel file has a header.
- sheet_name (Union[List[str], str], default: None): The list of sheet names to be used. This can be a unique name.
- exposed_type (default: "pandas"): The exposed type of the data read from the Excel file.
- scope (Scope, default: Scope.SCENARIO): The scope of the Excel data node configuration.
- cacheable (bool, default: False): If True, indicates that the Excel data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new Excel data node configuration.
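
A hedged sketch; the identifier, path, and sheet names are hypothetical:

```python
from taipy import Config

forecast_cfg = Config.configure_excel_data_node(
    id="forecast",                      # hypothetical identifier
    default_path="data/forecast.xlsx",  # hypothetical path
    sheet_name=["2022", "2023"],        # hypothetical sheet names
)
```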

configure_generic_data_node(id, read_fct=None, write_fct=None, read_fct_params=None, write_fct_params=None, scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new generic data node configuration.

Parameters:

- id (str, required): The unique identifier of the new generic data node configuration.
- read_fct (Optional[Callable], default: None): The Python function called to read the data.
- write_fct (Optional[Callable], default: None): The Python function called to write the data. The provided function must have at least one parameter that receives the data to be written.
- read_fct_params (Optional[List], default: None): The parameters that are passed to read_fct to read the data.
- write_fct_params (Optional[List], default: None): The parameters that are passed to write_fct to write the data.
- scope (Optional[Scope], default: Scope.SCENARIO): The scope of the generic data node configuration.
- cacheable (bool, default: False): If True, indicates that the generic data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new generic data node configuration.
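
As a sketch, a generic data node can wrap any reader/writer pair; the functions and file name below are hypothetical. Note that write_fct receives the data as its first argument, before the extra parameters:

```python
from taipy import Config

def read_lines(path):
    """Hypothetical reader: return the lines of a text file."""
    with open(path) as f:
        return f.readlines()

def write_lines(data, path):
    """Hypothetical writer: the data to write comes first."""
    with open(path, "w") as f:
        f.writelines(data)

log_cfg = Config.configure_generic_data_node(
    id="log_lines",
    read_fct=read_lines,
    write_fct=write_lines,
    read_fct_params=["app.log"],
    write_fct_params=["app.log"],
)
```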

configure_global_app(root_folder=None, storage_folder=None, clean_entities_enabled=None, **properties) classmethod

Configure the global application.

Parameters:

- root_folder (Optional[str], default: None): The path of the base folder for the Taipy application.
- storage_folder (Optional[str], default: None): The folder name used to store Taipy data. It is used in conjunction with the root_folder field: the storage path is "<root_folder><storage_folder>".
- clean_entities_enabled (Optional[str], default: None): The field to activate or deactivate the 'clean entities' feature. The default value is False.

Returns:

GlobalAppConfig: The global application configuration.
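
For example, a sketch with hypothetical folder names, giving a storage path of ".taipy_app/.data/":

```python
from taipy import Config

Config.configure_global_app(
    root_folder=".taipy_app/",   # hypothetical folders
    storage_folder=".data/",
    clean_entities_enabled=True,
)
```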

configure_in_memory_data_node(id, default_data=None, scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new in_memory data node configuration.

Parameters:

- id (str, required): The unique identifier of the new in_memory data node configuration.
- default_data (Optional[Any], default: None): The default data of the data nodes instantiated from this in_memory data node configuration.
- scope (Scope, default: Scope.SCENARIO): The scope of the in_memory data node configuration.
- cacheable (bool, default: False): If True, indicates that the in_memory data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new in_memory data node configuration.
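
A minimal sketch; the identifier and default data are hypothetical:

```python
from taipy import Config

params_cfg = Config.configure_in_memory_data_node(id="params", default_data={"alpha": 0.5})
```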

configure_job_executions(mode=None, nb_of_workers=None, max_nb_of_workers=None, **properties) staticmethod

Configure job execution.

Parameters:

- mode (Optional[str], default: None): The job execution mode. Possible values are "standalone" (the default value) or "development".
- max_nb_of_workers (Optional[Union[int, str]], default: None): Parameter used only in the default "standalone" mode. The maximum number of jobs able to run in parallel. The default value is 1. A string can be provided to dynamically set the value using an environment variable. The string must follow the pattern ENV[<env_var>], where <env_var> is the name of an environment variable.
- nb_of_workers (Optional[Union[int, str]], default: None): Deprecated. Use max_nb_of_workers instead.

Returns:

JobConfig: The job execution configuration.
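
For instance, a sketch allowing four parallel jobs; the environment-variable form is shown in a comment:

```python
from taipy import Config

# The worker count could also be read from an environment variable,
# e.g. max_nb_of_workers="ENV[MAX_WORKERS]".
Config.configure_job_executions(mode="standalone", max_nb_of_workers=4)
```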

configure_json_data_node(id, default_path=None, encoder=None, decoder=None, scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new JSON data node configuration.

Parameters:

- id (str, required): The unique identifier of the new JSON data node configuration.
- default_path (str, default: None): The default path of the JSON file.
- encoder (json.JSONEncoder, default: None): The JSON encoder used to write data into the JSON file.
- decoder (json.JSONDecoder, default: None): The JSON decoder used to read data from the JSON file.
- scope (Scope, default: Scope.SCENARIO): The scope of the JSON data node configuration.
- cacheable (bool, default: False): If True, indicates that the JSON data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new JSON data node configuration.
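
A sketch with a hypothetical custom encoder for datetime values; the identifier and path are hypothetical as well:

```python
import datetime
import json

from taipy import Config

class DateTimeEncoder(json.JSONEncoder):
    """Hypothetical encoder: serializes datetimes as ISO-8601 strings."""
    def default(self, o):
        if isinstance(o, datetime.datetime):
            return o.isoformat()
        return super().default(o)

events_cfg = Config.configure_json_data_node(
    id="events",                      # hypothetical identifier
    default_path="data/events.json",  # hypothetical path
    encoder=DateTimeEncoder,
)
```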

configure_pickle_data_node(id, default_data=None, scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new pickle data node configuration.

Parameters:

- id (str, required): The unique identifier of the new pickle data node configuration.
- default_data (Optional[Any], default: None): The default data of the data nodes instantiated from this pickle data node configuration.
- scope (Scope, default: Scope.SCENARIO): The scope of the pickle data node configuration.
- cacheable (bool, default: False): If True, indicates that the pickle data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new pickle data node configuration.
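
A minimal sketch with a hypothetical identifier; pickle is also the default storage type used by configure_data_node():

```python
from taipy import Config

model_cfg = Config.configure_pickle_data_node(id="trained_model")
```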

configure_pipeline(id, task_configs, **properties) staticmethod

Configure a new pipeline configuration.

Parameters:

- id (str, required): The unique identifier of the new pipeline configuration.
- task_configs (Union[TaskConfig, List[TaskConfig]], required): The list of the task configurations that make this new pipeline. This can be a single task configuration object if this pipeline holds a single task.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

PipelineConfig: The new pipeline configuration.
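
A sketch reusing a hypothetical no-op task configuration:

```python
from taipy import Config

noop_cfg = Config.configure_task(id="noop", function=lambda: None)  # hypothetical task
pipeline_cfg = Config.configure_pipeline(id="etl", task_configs=[noop_cfg])
```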

configure_scenario(id, pipeline_configs, frequency=None, comparators=None, **properties) staticmethod

Configure a new scenario configuration.

Parameters:

- id (str, required): The unique identifier of the new scenario configuration.
- pipeline_configs (List[PipelineConfig], required): The list of pipeline configurations used by this new scenario configuration.
- frequency (Optional[Frequency], default: None): The scenario frequency. It corresponds to the recurrence of the scenarios instantiated from this configuration. Based on this frequency, each scenario will be attached to the relevant cycle.
- comparators (Optional[Dict[str, Union[List[Callable], Callable]]], default: None): The list of functions used to compare scenarios. A comparator function is attached to a scenario's data node configuration. The key of the dictionary parameter corresponds to the data node configuration id. During the scenarios' comparison, each comparator is applied to all the data nodes instantiated from the data node configuration attached to the comparator. See compare_scenarios() for more details.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

ScenarioConfig: The new scenario configuration.
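
For illustration, a sketch attaching a hypothetical comparator to a hypothetical "kpi" data node configuration; the comparator is assumed to receive the data read from the compared scenarios' data nodes as positional arguments:

```python
from taipy import Config

def compare_kpi(*kpi_values):
    """Hypothetical comparator: spread of the KPI across scenarios."""
    return max(kpi_values) - min(kpi_values)

kpi_cfg = Config.configure_data_node(id="kpi")
noop_cfg = Config.configure_task(id="noop", function=lambda: None)
pipeline_cfg = Config.configure_pipeline(id="p", task_configs=[noop_cfg])
scenario_cfg = Config.configure_scenario(
    id="monthly",
    pipeline_configs=[pipeline_cfg],
    comparators={kpi_cfg.id: compare_kpi},
)
```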

configure_scenario_from_tasks(id, task_configs, frequency=None, comparators=None, pipeline_id=None, **properties) staticmethod

Configure a new scenario configuration made of a single new pipeline configuration.

A new pipeline configuration is created as well. If pipeline_id is not provided, the identifier of the new pipeline configuration is the scenario configuration identifier suffixed with '_pipeline'.

Parameters:

- id (str, required): The unique identifier of the scenario configuration.
- task_configs (List[TaskConfig], required): The list of task configurations used by the new pipeline configuration that is created.
- frequency (Optional[Frequency], default: None): The scenario frequency. It corresponds to the recurrence of the scenarios instantiated from this configuration. Based on this frequency, each scenario will be attached to the relevant cycle.
- comparators (Optional[Dict[str, Union[List[Callable], Callable]]], default: None): The list of functions used to compare scenarios. A comparator function is attached to a scenario's data node configuration. The key of the dictionary parameter corresponds to the data node configuration id. During the scenarios' comparison, each comparator is applied to all the data nodes instantiated from the data node configuration attached to the comparator. See compare_scenarios() for more details.
- pipeline_id (str, default: None): The identifier of the new pipeline configuration to be configured.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

ScenarioConfig: The new scenario configuration.
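
A sketch with a hypothetical task; since no pipeline_id is given, the implicit pipeline configuration is named "quick_pipeline":

```python
from taipy import Config

noop_cfg = Config.configure_task(id="noop", function=lambda: None)  # hypothetical task
scenario_cfg = Config.configure_scenario_from_tasks(id="quick", task_configs=[noop_cfg])
```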

configure_sql_data_node(id, db_username, db_password, db_name, db_engine, db_port=1433, db_host='localhost', db_driver='ODBC Driver 17 for SQL Server', db_extra_args=None, read_query=None, write_query_builder=None, exposed_type='pandas', scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new SQL data node configuration.

Parameters:

- id (str, required): The unique identifier of the new SQL data node configuration.
- db_username (str, required): The database username.
- db_password (str, required): The database password.
- db_name (str, required): The database name.
- db_engine (str, required): The database engine. Possible values are "sqlite" or "mssql".
- db_port (int, default: 1433): The database port.
- db_host (str, default: "localhost"): The database host.
- db_driver (str, default: "ODBC Driver 17 for SQL Server"): The database driver.
- db_extra_args (Dict[str, Any], default: None): A dictionary of additional arguments to be passed into the database connection string.
- read_query (str, default: None): The SQL query string used to read the data from the database.
- write_query_builder (Callable, default: None): A callback function that takes the data as an input parameter and returns a list of SQL queries.
- exposed_type (default: "pandas"): The exposed type of the data read from the SQL query.
- scope (Scope, default: Scope.SCENARIO): The scope of the SQL data node configuration.
- cacheable (bool, default: False): If True, indicates that the SQL data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new SQL data node configuration.
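
A hedged sketch; the credentials, query, and builder below are hypothetical, and a real builder should prefer parameterized queries over string formatting:

```python
from taipy import Config

def build_insert_queries(data):
    """Hypothetical builder: one INSERT per row of a pandas DataFrame."""
    return [
        f"INSERT INTO orders (id, total) VALUES ({row.id}, {row.total})"
        for row in data.itertuples(index=False)
    ]

orders_cfg = Config.configure_sql_data_node(
    id="orders",             # hypothetical identifier and credentials
    db_username="taipy",
    db_password="secret",
    db_name="shop",
    db_engine="mssql",
    read_query="SELECT id, total FROM orders",
    write_query_builder=build_insert_queries,
)
```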

configure_sql_table_data_node(id, db_username, db_password, db_name, db_engine, table_name=None, db_port=1433, db_host='localhost', db_driver='ODBC Driver 17 for SQL Server', db_extra_args=None, exposed_type='pandas', scope=Scope.SCENARIO, cacheable=False, **properties) staticmethod

Configure a new SQL table data node configuration.

Parameters:

- id (str, required): The unique identifier of the new SQL table data node configuration.
- db_username (str, required): The database username.
- db_password (str, required): The database password.
- db_name (str, required): The database name.
- db_engine (str, required): The database engine. Possible values are "sqlite" or "mssql".
- table_name (str, default: None): The name of the SQL table.
- db_port (int, default: 1433): The database port.
- db_host (str, default: "localhost"): The database host.
- db_driver (str, default: "ODBC Driver 17 for SQL Server"): The database driver.
- db_extra_args (Dict[str, Any], default: None): A dictionary of additional arguments to be passed into the database connection string.
- exposed_type (default: "pandas"): The exposed type of the data read from the SQL table.
- scope (Scope, default: Scope.SCENARIO): The scope of the SQL table data node configuration.
- cacheable (bool, default: False): If True, indicates that the SQL table data node is cacheable.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

DataNodeConfig: The new SQL table data node configuration.
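
A sketch with hypothetical connection details; with a table name instead of a query, reads and writes target the whole table:

```python
from taipy import Config

customers_cfg = Config.configure_sql_table_data_node(
    id="customers",          # hypothetical identifier and credentials
    db_username="taipy",
    db_password="secret",
    db_name="shop",
    db_engine="mssql",
    table_name="customers",
)
```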

configure_task(id, function, input=None, output=None, **properties) staticmethod

Configure a new task configuration.

Parameters:

- id (str, required): The unique identifier of this task configuration.
- function (Callable, required): The Python function called by Taipy to run the task.
- input (Optional[Union[DataNodeConfig, List[DataNodeConfig]]], default: None): The list of the function input data node configurations. This can be a unique data node configuration if there is a single input data node, or None if there are none.
- output (Optional[Union[DataNodeConfig, List[DataNodeConfig]]], default: None): The list of the function output data node configurations. This can be a unique data node configuration if there is a single output data node, or None if there are none.
- **properties (Dict[str, Any]): A variable-length list of additional keyword arguments.

Returns:

TaskConfig: The new task configuration.
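
For example, a sketch wiring one input and one output data node through a hypothetical function:

```python
from taipy import Config

def double(nb):
    """Hypothetical function: doubles its single input."""
    return nb * 2

nb_cfg = Config.configure_data_node(id="nb", default_data=21)  # hypothetical ids
doubled_cfg = Config.configure_data_node(id="doubled")
double_task_cfg = Config.configure_task("double", double, nb_cfg, doubled_cfg)
```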

export(filename) classmethod

Export a configuration.

The export is done to a TOML file.

The exported configuration is a compilation of the three possible ways to configure the application: the Python code configuration, the file configuration, and the environment configuration.

Parameters:

- filename (Union[str, Path], required): The path of the file to export.
Note

If filename already exists, it is overwritten.
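
A one-line sketch with a hypothetical file name:

```python
from taipy import Config

Config.export("config_export.toml")  # overwritten if the file already exists
```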

global_config()

Return configuration values related to the global application as a GlobalAppConfig.

load(filename) classmethod

Load a configuration file.

Parameters:

- filename (Union[str, Path], required): The path of the TOML configuration file to load.
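
Symmetrically, a sketch loading a TOML configuration file from a hypothetical path:

```python
from taipy import Config

Config.load("config.toml")
```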

sections()

Return all non-unique sections.

unique_sections()

Return all unique sections.