
Config class

Singleton class that manages the configuration of a Taipy application.

The Config singleton is the main class to use for configuring a Taipy application. In particular, this class provides:

  1. Various methods to configure the application's behavior

    The Config class provides various methods to configure the application. Each method adds a specific section to the configuration and returns it.

    Most frequently used configuration methods
    from taipy import Config
    
    def by_two(x: int):
        return x * 2
    
    input_cfg = Config.configure_data_node("my_input")
    result_cfg = Config.configure_data_node("my_result")
    task_cfg = Config.configure_task("my_double", function=by_two, input=input_cfg, output=result_cfg)
    scenario_cfg = Config.configure_scenario("my_scenario", task_configs=[task_cfg])
    
    Config.load("config.toml") # Load a configuration file
    
    Advanced use case

    The configuration can be done in three ways: Python code, configuration files, or environment variables. These three sources are ultimately merged, each overriding the previous one, to build the final applied configuration. Please refer to the advanced configuration section of the user manual for more details.

  2. Attributes and methods to retrieve the configuration values.

    Once the configuration is done, you can retrieve the configuration values using the exposed attributes.

    Retrieve configuration values
    from taipy import Config
    
    global_cfg = Config.global_config  # Retrieve the global application configuration
    data_node_cfgs = Config.data_nodes  # Retrieve all data node configurations
    scenario_cfgs = Config.scenarios  # Retrieve all scenario configurations
    
  3. A few methods to manage the configuration:

    The Config class also provides a few methods to manage the configuration; a short usage sketch follows the list below.

    Manage the configuration
    • Check the configuration for issues: Use the Config.check() method to check the configuration. It returns an IssueCollector containing all the issues found. The issues are logged to the console for debugging.
    • Block the configuration update: Use the Config.block_update() method to forbid any update on the configuration. This can be useful when you want to ensure that the configuration is not modified at run time. Note that running the Orchestrator service automatically blocks the configuration update.
    • Unblock the configuration update: Use the Config.unblock_update() method to allow updates on the configuration again.
    • Back up the configuration: Use the Config.backup() method to back up the applied configuration as a TOML file. The applied configuration that is backed up is the result of compiling the three possible configuration methods, each overriding the previous one.
    • Restore the configuration: Use the Config.restore() method to restore a TOML configuration file and replace the current applied configuration.
    • Export the configuration: Use the Config.export() method to export the Python code configuration as a TOML file.
    • Load the configuration: Use the Config.load() method to load a TOML configuration file and replace the current Python configuration.
    • Override the configuration: Use the Config.override() method to load a TOML configuration file and override the current Python configuration.
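
    For illustration, here is a minimal sketch combining these management methods; the file names are hypothetical:

    from taipy import Config
    
    issues = Config.check()          # Returns an IssueCollector; issues are also logged to the console
    Config.backup("backup.toml")     # Hypothetical file name: snapshot of the applied configuration
    Config.export("my_config.toml")  # Hypothetical file name: the Python code configuration only
    Config.block_update()            # Forbid any further change to the configuration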

Attributes

authentication_config property

authentication_config: AuthenticationConfig

The configured AuthenticationConfig sections.

core property

core: CoreSection

The configured CoreSection section.

data_nodes property

data_nodes: DataNodeConfig

The configured DataNodeConfig sections.

global_config property

global_config: GlobalAppConfig

The configuration values related to the global application, as a GlobalAppConfig.

gui_config property

gui_config: _GuiSection

The configured _GuiSection section.

job_config property

job_config: JobConfig

The configured JobConfig section.

migration_functions property

migration_functions: MigrationConfig

The configured MigrationConfig section.

scenarios property

scenarios: ScenarioConfig

The configured ScenarioConfig sections.

sections property

sections: Dict[str, Dict[str, Section]]

A dictionary containing all non-unique sections.

tasks property

tasks: TaskConfig

The configured TaskConfig sections.

telemetry property

telemetry: TelemetrySection

The configured TelemetrySection section.

unique_sections property

unique_sections: Dict[str, UniqueSection]

A dictionary containing all unique sections.

Methods

add_migration_function() staticmethod

add_migration_function(
    target_version: str,
    config: Union[Section, str],
    migration_fct: Callable,
    **properties
) -> MigrationConfig

Add a migration function for a Configuration to migrate entities to the target version.

Parameters:

Name Type Description Default
target_version str

The production version that entities are migrated to.

required
config Union[Section, str]

The configuration, or the id of the configuration, whose entities need to be migrated.

required
migration_fct Callable

Migration function that takes an entity as input and returns a new entity that is compatible with the target production version.

required
**properties Dict[str, Any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
MigrationConfig

The Migration configuration.
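
A minimal sketch of registering a migration function; the target version, configuration id, and migration logic are hypothetical:

from taipy import Config

def migrate_input(entity):
    # Hypothetical migration: adapt the entity so it is compatible with version "2.0"
    entity.properties["unit"] = "meters"
    return entity

Config.add_migration_function(
    target_version="2.0",     # Hypothetical production version
    config="my_input",        # Id of the configuration whose entities must migrate
    migration_fct=migrate_input,
)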

backup() classmethod

backup(filename: str) -> None

Backup a configuration.

The backup is saved as a TOML file.

The backed-up configuration is a compilation of the three possible ways to configure the application: the Python code configuration, the file configuration, and the environment configuration.

Note

If filename already exists, it is overwritten.

Parameters:

Name Type Description Default
filename Union[str, Path]

The path of the file to export.

required

block_update() classmethod

block_update() -> None

Block update on the configuration singleton.

check() classmethod

check() -> IssueCollector

Check configuration.

This method logs issue messages and returns an issue collector.

Returns:

Type Description
IssueCollector

Collector containing the info, warning and error issues.

Raises:

Type Description
SystemExit

If configuration errors are found, the application exits with an error message.

configure_authentication() staticmethod

configure_authentication(
    protocol: Optional[str] = None,
    secret_key: Optional[str] = None,
    auth_session_duration: int = 3600,
    id: Optional[str] = None,
    **properties
) -> AuthenticationConfig

Configure authentication.

Parameters:

Name Type Description Default
id Optional[str]

Unique identifier of the authentication config. It must be a valid Python variable name.

None
protocol Optional[str]

The name of the protocol to configure ("ldap", "entra_id", "taipy" or "none").

None
secret_key Optional[str]

A secret string used to internally encrypt the credentials' information. If no value is provided, the first run-time authentication sets the default value to a random text string.

None
auth_session_duration Optional[int]

How long, in seconds, credentials remain valid after their creation. The default value is 3600, corresponding to one hour.

3600
**properties Dict[str, Any]

A keyworded variable length list of additional arguments.
Depending on the protocol, these arguments are:

  • "LDAP" protocol: the following arguments are accepted:
    • server: the URL of the LDAP server this authenticator connects to.
    • base_dn: the LDAP distinguished name that is used.
  • "Entra ID" protocol: the following arguments are accepted:
    • client_id: the client ID of the Entra ID application. The application must be registered in the Azure Entra ID portal and have the required permissions including the "User.Read" and "Team.ReadBasic.All" permissions.
    • tenant_id: the tenant ID of the Entra ID organization.
  • "Taipy" protocol: the following arguments are accepted:
    • roles: a dictionary that configures the association of usernames to roles.
    • passwords: if required, a dictionary that configures the association of usernames to hashed passwords. A user can be authenticated if they appear in at least one of the roles or passwords dictionaries.
      If a user only appears in roles, they are authenticated only if the provided password is exactly identical to their username.
      If a user only appears in passwords, they are assigned no roles.
  • "None": No additional arguments are required.
{}

Returns:

Type Description
AuthenticationConfig

The authentication configuration.
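
A minimal sketch using the built-in "taipy" protocol; the usernames and roles are hypothetical:

from taipy import Config

auth_cfg = Config.configure_authentication(
    protocol="taipy",
    roles={"alice": ["admin"], "bob": ["reader"]},  # Hypothetical usernames and roles
    auth_session_duration=7200,                     # Credentials stay valid for two hours
)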

configure_core() staticmethod

configure_core(
    root_folder: Optional[str] = None,
    storage_folder: Optional[str] = None,
    taipy_storage_folder: Optional[str] = None,
    repository_type: Optional[str] = None,
    repository_properties: Optional[
        Dict[str, Union[str, int]]
    ] = None,
    read_entity_retry: Optional[int] = None,
    mode: Optional[str] = None,
    version_number: Optional[str] = None,
    force: Optional[bool] = None,
    **properties
) -> CoreSection

Configure the Orchestrator service.

Parameters:

Name Type Description Default
root_folder Optional[str]

Path of the base folder for the Taipy application. The default value is "./taipy/".

None
storage_folder str

Folder name used to store user data. The default value is "user_data/". It is used in conjunction with the root_folder attribute: the storage path is the concatenation of root_folder and storage_folder (so the default path is "./taipy/user_data/").

None
taipy_storage_folder str

Folder name used to store Taipy data. The default value is ".taipy/". It is used in conjunction with the root_folder attribute: the storage path is the concatenation of root_folder and taipy_storage_folder (so the default path is "./taipy/.taipy/").

None
repository_type Optional[str]

The type of the repository to be used to store Taipy data. The default value is "filesystem".

None
repository_properties Optional[Dict[str, Union[str, int]]]

A dictionary of additional properties to be used by the repository.

None
read_entity_retry Optional[int]

Number of retries to read an entity from the repository before returning a failure. The default value is 3.

None
mode Optional[str]

Indicates the mode of the version management system. Possible values are "development" or "experiment". In the Enterprise edition of Taipy, the production mode is also available. Please refer to the Versioning management documentation page for more details.

None
version_number Optional[str]

The string identifier of the version. In development mode, the version number is ignored.

None
force Optional[bool]

If True, Taipy overrides the version even if the configuration has changed, and runs the application.

None
**properties Dict[str, Any]

A keyworded variable length list of additional arguments that configure the behavior of the Orchestrator service.

{}

Returns:

Type Description
CoreSection

The Core configuration.
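
A minimal sketch, with hypothetical folder and version values:

from taipy import Config

core_cfg = Config.configure_core(
    root_folder="./my_app/",      # Hypothetical base folder
    storage_folder="user_data/",
    mode="experiment",
    version_number="1.0",         # Hypothetical version identifier
)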

configure_csv_data_node() staticmethod

configure_csv_data_node(
    id: str,
    default_path: Optional[str] = None,
    encoding: Optional[str] = None,
    has_header: Optional[bool] = None,
    exposed_type: Optional[str] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new CSV data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new CSV data node configuration.

required
default_path Optional[str]

The default path of the CSV file.

None
encoding Optional[str]

The encoding of the CSV file.

None
has_header Optional[bool]

If True, indicates that the CSV file has a header.

None
exposed_type Optional[str]

The exposed type of the data read from the CSV file.
The default value is "pandas".

None
scope Optional[Scope]

The scope of the CSV data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new CSV data node configuration.
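
A minimal sketch, with a hypothetical identifier and file path:

from datetime import timedelta

from taipy import Config, Scope

sales_cfg = Config.configure_csv_data_node(
    id="sales_history",                 # Hypothetical identifier
    default_path="data/sales.csv",      # Hypothetical path
    has_header=True,
    exposed_type="pandas",
    scope=Scope.GLOBAL,
    validity_period=timedelta(days=1),  # Considered stale one day after the last edit
)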

configure_data_node() staticmethod

configure_data_node(
    id: str,
    storage_type: Optional[str] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new data node configuration.

required
storage_type Optional[str]

The data node configuration storage type. The possible values are "pickle", "csv", "excel", "sql_table", "sql", "json", "parquet", "mongo_collection", "in_memory", "generic", or None. The default is None, which falls back to "pickle" unless another storage_type has been set in the default data node configuration (see set_default_data_node_configuration()).

None
scope Optional[Scope]

The scope of the data node configuration.
The default value is Scope.SCENARIO (or the one specified in set_default_data_node_configuration()).

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new data node configuration.

configure_data_node_from() staticmethod

configure_data_node_from(
    source_configuration: DataNodeConfig,
    id: str,
    **properties
) -> DataNodeConfig

Configure a new data node configuration from an existing one.

Parameters:

Name Type Description Default
source_configuration DataNodeConfig

The source data node configuration.

required
id str

The unique identifier of the new data node configuration.

required
**properties dict[str, any]

A keyworded variable length list of additional arguments.
The default properties are the properties of the source data node configuration.

{}

Returns:

Type Description
DataNodeConfig

The new data node configuration.
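
A minimal sketch deriving one configuration from another; the identifiers and the overriding property are hypothetical:

from taipy import Config

template_cfg = Config.configure_data_node("template", storage_type="csv", has_header=True)

# The new configuration inherits the source properties and may override some of them
monthly_cfg = Config.configure_data_node_from(
    source_configuration=template_cfg,
    id="monthly_sales",                     # Hypothetical identifier
    default_path="data/monthly_sales.csv",  # Hypothetical overriding property
)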

configure_excel_data_node() staticmethod

configure_excel_data_node(
    id: str,
    default_path: Optional[str] = None,
    has_header: Optional[bool] = None,
    sheet_name: Optional[Union[List[str], str]] = None,
    exposed_type: Optional[str] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new Excel data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new Excel data node configuration.

required
default_path Optional[str]

The path of the Excel file.

None
has_header Optional[bool]

If True, indicates that the Excel file has a header.

None
sheet_name Optional[Union[List[str], str]]

The list of sheet names to be used. This can also be a single sheet name.

None
exposed_type Optional[str]

The exposed type of the data read from the Excel file.
The default value is "pandas".

None
scope Optional[Scope]

The scope of the Excel data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new Excel data node configuration.

configure_generic_data_node() staticmethod

configure_generic_data_node(
    id: str,
    read_fct: Optional[Callable] = None,
    write_fct: Optional[Callable] = None,
    read_fct_args: Optional[List] = None,
    write_fct_args: Optional[List] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new generic data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new generic data node configuration.

required
read_fct Optional[Callable]

The Python function called to read the data.

None
write_fct Optional[Callable]

The Python function called to write the data. The provided function must have at least one parameter that receives the data to be written.

None
read_fct_args Optional[List]

The list of arguments that are passed to the function read_fct to read data.

None
write_fct_args Optional[List]

The list of arguments that are passed to the function write_fct to write the data.

None
scope Optional[Scope]

The scope of the Generic data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new Generic data node configuration.
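
A minimal sketch, with hypothetical read and write functions:

from taipy import Config

def read_data(source: str):
    # Hypothetical reader; receives the arguments listed in read_fct_args
    return [1, 2, 3]

def write_data(data, destination: str):
    # Hypothetical writer; the first parameter receives the data to be written
    print(f"Writing {data} to {destination}")

generic_cfg = Config.configure_generic_data_node(
    id="remote_data",                         # Hypothetical identifier
    read_fct=read_data,
    write_fct=write_data,
    read_fct_args=["source_endpoint"],        # Hypothetical arguments
    write_fct_args=["destination_endpoint"],
)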

configure_global_app() classmethod

configure_global_app(**properties) -> GlobalAppConfig

Configure the global application.

Parameters:

Name Type Description Default
**properties Dict[str, Any]

A dictionary of additional properties.

{}

Returns:

Type Description
GlobalAppConfig

The global application configuration.

configure_gui() staticmethod

configure_gui(**properties) -> _GuiSection

Configure the Graphical User Interface.

Parameters:

Name Type Description Default
**properties dict[str, any]

Keyword arguments that configure the behavior of the Gui instances.
Please refer to the gui config section page of the User Manual for more information on the accepted arguments.

{}

Returns:

Type Description
_GuiSection

The GUI configuration.

configure_in_memory_data_node() staticmethod

configure_in_memory_data_node(
    id: str,
    default_data: Optional[Any] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new in-memory data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new in_memory data node configuration.

required
default_data Optional[any]

The default data of the data nodes instantiated from this in_memory data node configuration. If provided, note that default_data is stored as a configuration attribute, so it is designed to handle small data values like parameters, and it must be JSON serializable.

None
scope Optional[Scope]

The scope of the in_memory data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new in-memory data node configuration.

configure_job_executions() staticmethod

configure_job_executions(
    mode: Optional[str] = None,
    max_nb_of_workers: Optional[Union[int, str]] = None,
    **properties
) -> JobConfig

Configure job execution.

Parameters:

Name Type Description Default
mode Optional[str]

The job execution mode. Possible values are: "standalone" or "development".

None
max_nb_of_workers Optional[Union[int, str]]

Parameter used only in "standalone" mode. This indicates the maximum number of jobs able to run in parallel.
The default value is 2.
A string can be provided to dynamically set the value using an environment variable. The string must follow the pattern: ENV[<env_var>] where <env_var> is the name of an environment variable.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
JobConfig

The new job execution configuration.
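
A minimal sketch running jobs in parallel; the worker count is an example value:

from taipy import Config

job_cfg = Config.configure_job_executions(
    mode="standalone",
    max_nb_of_workers=4,  # Or "ENV[MAX_WORKERS]" to read the value from an environment variable
)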

configure_json_data_node() staticmethod

configure_json_data_node(
    id: str,
    default_path: Optional[str] = None,
    encoding: Optional[str] = None,
    encoder: Optional[json.JSONEncoder] = None,
    decoder: Optional[json.JSONDecoder] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new JSON data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new JSON data node configuration.

required
default_path Optional[str]

The default path of the JSON file.

None
encoding Optional[str]

The encoding of the JSON file.

None
encoder Optional[JSONEncoder]

The JSON encoder used to write data into the JSON file.

None
decoder Optional[JSONDecoder]

The JSON decoder used to read data from the JSON file.

None
scope Optional[Scope]

The scope of the JSON data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new JSON data node configuration.
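
A minimal sketch with a hypothetical custom encoder:

import json

from taipy import Config

class SetEncoder(json.JSONEncoder):
    # Hypothetical encoder: serialize sets as lists
    def default(self, o):
        if isinstance(o, set):
            return list(o)
        return super().default(o)

settings_cfg = Config.configure_json_data_node(
    id="settings",                      # Hypothetical identifier
    default_path="data/settings.json",  # Hypothetical path
    encoder=SetEncoder,
)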

configure_mongo_collection_data_node() staticmethod

configure_mongo_collection_data_node(
    id: str,
    db_name: str,
    collection_name: str,
    custom_document: Optional[Any] = None,
    db_username: Optional[str] = None,
    db_password: Optional[str] = None,
    db_host: Optional[str] = None,
    db_port: Optional[int] = None,
    db_driver: Optional[str] = None,
    db_extra_args: Optional[Dict[str, Any]] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new Mongo collection data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new Mongo collection data node configuration.

required
db_name str

The database name.

required
collection_name str

The collection in the database to read from and to write the data to.

required
custom_document Optional[any]

The custom document class to store, encode, and decode data when reading from and writing to a Mongo collection. The custom_document can have an optional decode() method to decode data in the Mongo collection to a custom object, and an optional encode() method to encode the object's properties to the Mongo collection when writing.

None
db_username Optional[str]

The database username.

None
db_password Optional[str]

The database password.

None
db_host Optional[str]

The database host.
The default value is "localhost".

None
db_port Optional[int]

The database port.
The default value is 27017.

None
db_driver Optional[str]

The database driver.

None
db_extra_args Optional[dict[str, any]]

A dictionary of additional arguments to be passed into the database connection string.

None
scope Optional[Scope]

The scope of the Mongo collection data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new Mongo collection data node configuration.
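
A minimal sketch; the database name, collection, and credentials are hypothetical:

from taipy import Config

orders_cfg = Config.configure_mongo_collection_data_node(
    id="orders",               # Hypothetical identifier
    db_name="shop",            # Hypothetical database name
    collection_name="orders",  # Hypothetical collection
    db_username="taipy_user",  # Hypothetical credentials
    db_password="secret",
    db_host="localhost",
    db_port=27017,
)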

configure_parquet_data_node() staticmethod

configure_parquet_data_node(
    id: str,
    default_path: Optional[str] = None,
    engine: Optional[str] = None,
    compression: Optional[str] = None,
    read_kwargs: Optional[Dict] = None,
    write_kwargs: Optional[Dict] = None,
    exposed_type: Optional[str] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new Parquet data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new Parquet data node configuration.

required
default_path Optional[str]

The default path of the Parquet file.

None
engine Optional[str]

Parquet library to use. Possible values are "fastparquet" or "pyarrow".
The default value is "pyarrow".

None
compression Optional[str]

Name of the compression to use. Possible values are "snappy", "gzip", "brotli", or "none" (no compression). The default value is "snappy".

None
read_kwargs Optional[dict]

Additional parameters passed to the pandas.read_parquet() function.

None
write_kwargs Optional[dict]

Additional parameters passed to the pandas.DataFrame.to_parquet() function.
The parameters in read_kwargs and write_kwargs have a higher precedence than the top-level parameters, which are also passed to pandas.

None
exposed_type Optional[str]

The exposed type of the data read from the Parquet file.
The default value is "pandas".

None
scope Optional[Scope]

The scope of the Parquet data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new Parquet data node configuration.
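
A minimal sketch; the path and the column selection are hypothetical:

from taipy import Config

features_cfg = Config.configure_parquet_data_node(
    id="features",                             # Hypothetical identifier
    default_path="data/features.parquet",      # Hypothetical path
    engine="pyarrow",
    compression="gzip",
    read_kwargs={"columns": ["id", "value"]},  # Forwarded to pandas.read_parquet()
)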

configure_pickle_data_node() staticmethod

configure_pickle_data_node(
    id: str,
    default_path: Optional[str] = None,
    default_data: Optional[Any] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new pickle data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new pickle data node configuration.

required
default_path Optional[str]

The path of the pickle file.

None
default_data Optional[any]

The default data of the data nodes instantiated from this pickle data node configuration. If provided, note that default_data is stored as a configuration attribute, so it is designed to handle small data values like parameters, and it must be JSON serializable.

None
scope Optional[Scope]

The scope of the pickle data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new pickle data node configuration.

configure_s3_object_data_node() staticmethod

configure_s3_object_data_node(
    id: str,
    aws_access_key: str,
    aws_secret_access_key: str,
    aws_s3_bucket_name: str,
    aws_s3_object_key: str,
    aws_region: Optional[str] = None,
    aws_s3_object_parameters: Optional[
        Dict[str, Any]
    ] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new S3 object data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new S3 Object data node configuration.

required
aws_access_key str

The Amazon Web Services access key ID used to identify the account.

required
aws_secret_access_key str

Amazon Web Services access key to authenticate programmatic requests.

required
aws_s3_bucket_name str

The bucket in S3 to read from and to write the data to.

required
aws_region Optional[str]

Self-contained geographic area where Amazon Web Services (AWS) infrastructure is located.

None
aws_s3_object_parameters Optional[dict[str, any]]

A dictionary of additional arguments to be passed into the AWS S3 bucket access string.

None
scope Optional[Scope]

The scope of the S3 Object data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new S3 object data node configuration.
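
A minimal sketch; the credentials, bucket, and object key are placeholders:

from taipy import Config

export_cfg = Config.configure_s3_object_data_node(
    id="raw_export",                       # Hypothetical identifier
    aws_access_key="ACCESS_KEY_ID",        # Placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    aws_s3_bucket_name="my-bucket",        # Placeholder bucket
    aws_s3_object_key="exports/data.csv",  # Placeholder object key
    aws_region="eu-west-1",
)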

configure_scenario() staticmethod

configure_scenario(
    id: str,
    task_configs: Optional[List[TaskConfig]] = None,
    additional_data_node_configs: Optional[
        List[DataNodeConfig]
    ] = None,
    frequency: Optional[Frequency] = None,
    comparators: Optional[
        Dict[str, Union[List[Callable], Callable]]
    ] = None,
    sequences: Optional[Dict[str, List[TaskConfig]]] = None,
    **properties
) -> ScenarioConfig

Configure a new scenario configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new scenario configuration.

required
task_configs Optional[List[TaskConfig]]

The list of task configurations used by this scenario configuration. The default value is None.

None
additional_data_node_configs Optional[List[DataNodeConfig]]

The list of additional data nodes related to this scenario configuration. The default value is None.

None
frequency Optional[Frequency]

The scenario frequency.
It corresponds to the recurrence of the scenarios instantiated from this configuration. Based on this frequency each scenario will be attached to the relevant cycle.

None
comparators Optional[Dict[str, Union[List[Callable], Callable]]]

The list of functions used to compare scenarios. A comparator function is attached to a scenario's data node configuration. The key of the dictionary parameter corresponds to the data node configuration id. During the scenarios' comparison, each comparator is applied to all the data nodes instantiated from the data node configuration attached to the comparator. See compare_scenarios() for more details.

None
sequences Optional[Dict[str, List[TaskConfig]]]

Dictionary of sequence descriptions. The default value is None.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
ScenarioConfig

The new scenario configuration.
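
A sketch building on the task configuration from the example at the top of this page; the comparator and sequence names are hypothetical:

from taipy import Config, Frequency

def compare_results(*results):
    # Hypothetical comparator: return the spread between the scenarios' results
    return max(results) - min(results)

scenario_cfg = Config.configure_scenario(
    "my_scenario",
    task_configs=[task_cfg],                     # task_cfg as configured with configure_task()
    frequency=Frequency.MONTHLY,                 # Each scenario attaches to a monthly cycle
    comparators={"my_result": compare_results},  # Keyed by data node configuration id
    sequences={"main": [task_cfg]},              # Hypothetical sequence description
)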

configure_sql_data_node() staticmethod

configure_sql_data_node(
    id: str,
    db_name: str,
    db_engine: str,
    read_query: str,
    write_query_builder: Callable,
    append_query_builder: Optional[Callable] = None,
    db_username: Optional[str] = None,
    db_password: Optional[str] = None,
    db_host: Optional[str] = None,
    db_port: Optional[int] = None,
    db_driver: Optional[str] = None,
    sqlite_folder_path: Optional[str] = None,
    sqlite_file_extension: Optional[str] = None,
    db_extra_args: Optional[Dict[str, Any]] = None,
    exposed_type: Optional[str] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new SQL data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new SQL data node configuration.

required
db_name str

The database name, or the name of the SQLite database file.

required
db_engine str

The database engine. Possible values are "sqlite", "mssql", "mysql", or "postgresql".

required
read_query str

The SQL query string used to read the data from the database.

required
write_query_builder Callable

A callback function that takes the data as an input parameter and returns a list of SQL queries to be executed when writing data to the data node.

required
append_query_builder Optional[Callable]

A callback function that takes the data as an input parameter and returns a list of SQL queries to be executed when appending data to the data node.

None
db_username Optional[str]

The database username. Required by the "mssql", "mysql", and "postgresql" engines.

None
db_password Optional[str]

The database password. Required by the "mssql", "mysql", and "postgresql" engines.

None
db_host Optional[str]

The database host.
The default value is "localhost".

None
db_port Optional[int]

The database port.
The default value is 1433.

None
db_driver Optional[str]

The database driver.

None
sqlite_folder_path Optional[str]

The path to the folder that contains the SQLite file.
The default value is the current working folder.

None
sqlite_file_extension Optional[str]

The file extension of the SQLite file.
The default value is ".db".

None
db_extra_args Optional[dict[str, any]]

A dictionary of additional arguments to be passed into the database connection string.

None
exposed_type Optional[str]

The exposed type of the data read from the SQL query.
The default value is "pandas".

None
scope Optional[Scope]

The scope of the SQL data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new SQL data node configuration.
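
A minimal sketch using the "sqlite" engine; the database, table, and query-building logic are hypothetical:

from taipy import Config

def build_write_queries(data):
    # Hypothetical builder: return the list of SQL queries that write the data
    return [f"INSERT INTO sales (amount) VALUES ({amount})" for amount in data]

sales_sql_cfg = Config.configure_sql_data_node(
    id="sales",                             # Hypothetical identifier
    db_name="analytics",                    # For SQLite, the database file name
    db_engine="sqlite",
    read_query="SELECT amount FROM sales",  # Hypothetical query
    write_query_builder=build_write_queries,
    sqlite_folder_path="data",              # Hypothetical folder
)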

configure_sql_table_data_node() staticmethod

configure_sql_table_data_node(
    id: str,
    db_name: str,
    db_engine: str,
    table_name: str,
    db_username: Optional[str] = None,
    db_password: Optional[str] = None,
    db_host: Optional[str] = None,
    db_port: Optional[int] = None,
    db_driver: Optional[str] = None,
    sqlite_folder_path: Optional[str] = None,
    sqlite_file_extension: Optional[str] = None,
    db_extra_args: Optional[Dict[str, Any]] = None,
    exposed_type: Optional[str] = None,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Configure a new SQL table data node configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of the new SQL table data node configuration.

required
db_name str

The database name, or the name of the SQLite database file.

required
db_engine str

The database engine. Possible values are "sqlite", "mssql", "mysql", or "postgresql".

required
table_name str

The name of the SQL table.

required
db_username Optional[str]

The database username. Required by the "mssql", "mysql", and "postgresql" engines.

None
db_password Optional[str]

The database password. Required by the "mssql", "mysql", and "postgresql" engines.

None
db_host Optional[str]

The database host.
The default value is "localhost".

None
db_port Optional[int]

The database port.
The default value is 1433.

None
db_driver Optional[str]

The database driver.

None
sqlite_folder_path Optional[str]

The path to the folder that contains the SQLite file.
The default value is the current working folder.

None
sqlite_file_extension Optional[str]

The file extension of the SQLite file.
The default value is ".db".

None
db_extra_args Optional[dict[str, any]]

A dictionary of additional arguments to be passed into the database connection string.

None
exposed_type Optional[str]

The exposed type of the data read from the SQL table.
The default value is "pandas".

None
scope Optional[Scope]

The scope of the SQL table data node configuration.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The new SQL table data node configuration.

configure_task() staticmethod

configure_task(
    id: str,
    function: Optional[Callable],
    input: Optional[
        Union[DataNodeConfig, List[DataNodeConfig]]
    ] = None,
    output: Optional[
        Union[DataNodeConfig, List[DataNodeConfig]]
    ] = None,
    skippable: bool = False,
    **properties
) -> TaskConfig

Configure a new task configuration.

Parameters:

Name Type Description Default
id str

The unique identifier of this task configuration.

required
function Callable

The Python function called by Taipy to run the task.

required
input Optional[Union[DataNodeConfig, List[DataNodeConfig]]]

The list of the function input data node configurations. This can be a unique data node configuration if there is a single input data node, or None if there are none.

None
output Optional[Union[DataNodeConfig, List[DataNodeConfig]]]

The list of the function output data node configurations. This can be a unique data node configuration if there is a single output data node, or None if there are none.

None
skippable bool

If True, indicates that the task can be skipped if no change has been made on inputs.
The default value is False.

False
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
TaskConfig

The new task configuration.
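
A sketch of a task with two inputs and a skippable flag; the function and identifiers are hypothetical:

from taipy import Config

def add(a: int, b: int) -> int:
    return a + b

a_cfg = Config.configure_data_node("a")
b_cfg = Config.configure_data_node("b")
sum_cfg = Config.configure_data_node("sum")

add_task_cfg = Config.configure_task(
    "add_numbers",         # Hypothetical identifier
    function=add,
    input=[a_cfg, b_cfg],  # Two input data node configurations
    output=sum_cfg,        # A single output can be passed directly
    skippable=True,        # Skip the task when no input has changed
)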

configure_telemetry() staticmethod

configure_telemetry(
    enabled: Optional[bool] = None,
    service_name: Optional[str] = None,
    otel_endpoint: Optional[str] = None,
    **properties
) -> TelemetrySection

Configure the Telemetry service.

Create a telemetry service section in the Taipy Config, holding attributes to configure telemetry. When enabled, the Taipy application connects to the specified OpenTelemetry endpoint (or "localhost" if none is specified) to send metrics.

Parameters:

Name Type Description Default
enabled Optional[bool]

Enable telemetry. If True, telemetry is activated. The default value is False.

None
service_name Optional[str]

The service name.

None
otel_endpoint Optional[str]

The OpenTelemetry endpoint.

None
**properties Dict[str, Any]

A dictionary of additional properties.

{}

Returns:

Type Description
TelemetrySection

The Telemetry Section.

export() classmethod

export(filename: str) -> None

Export a configuration.

The export is done in a toml file. The exported configuration is taken from the Python code configuration.

Note

If filename already exists, it is overwritten.

Parameters:

Name Type Description Default
filename Union[str, Path]

The path of the file to export.

required

load() classmethod

load(filename: str) -> None

Load a configuration file.

The current Python configuration is replaced and the Config compilation is triggered.

Parameters:

Name Type Description Default
filename Union[str, Path]

The path of the toml configuration file to load.

required

override() classmethod

override(filename: str) -> None

Load a configuration from a file and override the current config.

Parameters:

Name Type Description Default
filename Union[str, Path]

The path of the toml configuration file to load.

required

restore() classmethod

restore(filename: str) -> None

Restore a configuration file and replace the current applied configuration.

Parameters:

Name Type Description Default
filename Union[str, Path]

The path of the toml configuration file to load.

required

set_default_data_node_configuration() staticmethod

set_default_data_node_configuration(
    storage_type: str,
    scope: Optional[Scope] = None,
    validity_period: Optional[timedelta] = None,
    **properties
) -> DataNodeConfig

Set the default values for data node configurations.

This function creates the default data node configuration object, where all data node configuration objects will find their default values when needed.

Parameters:

Name Type Description Default
storage_type str

The default storage type for all data node configurations. The possible values are "pickle" (the default value), "csv", "excel", "sql", "mongo_collection", "in_memory", "json", "parquet", "generic", or "s3_object".

required
scope Optional[Scope]

The default scope for all data node configurations.
The default value is Scope.SCENARIO.

None
validity_period Optional[timedelta]

The duration since the last edit date for which the data node can be considered up-to-date. Once the validity period has passed, the data node is considered stale and relevant tasks will run even if they are skippable (see the Task configuration page for more details). If validity_period is set to None, the data node is always up-to-date.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
DataNodeConfig

The default data node configuration.
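
A minimal sketch; subsequent data node configurations inherit the defaults set here:

from taipy import Config, Scope

Config.set_default_data_node_configuration(
    storage_type="parquet",  # New data node configurations default to Parquet storage...
    scope=Scope.GLOBAL,      # ...and to the GLOBAL scope
)

prices_cfg = Config.configure_data_node("prices")  # Inherits storage_type="parquet" and Scope.GLOBAL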

set_default_scenario_configuration() staticmethod

set_default_scenario_configuration(
    task_configs: Optional[List[TaskConfig]] = None,
    additional_data_node_configs: Optional[
        List[DataNodeConfig]
    ] = None,
    frequency: Optional[Frequency] = None,
    comparators: Optional[
        Dict[str, Union[List[Callable], Callable]]
    ] = None,
    sequences: Optional[Dict[str, List[TaskConfig]]] = None,
    **properties
) -> ScenarioConfig

Set the default values for scenario configurations.

This function creates the default scenario configuration object, where all scenario configuration objects will find their default values when needed.

Parameters:

Name Type Description Default
task_configs Optional[List[TaskConfig]]

The list of task configurations used by this scenario configuration.

None
additional_data_node_configs Optional[List[DataNodeConfig]]

The list of additional data nodes related to this scenario configuration.

None
frequency Optional[Frequency]

The scenario frequency. It corresponds to the recurrence of the scenarios instantiated from this configuration. Based on this frequency each scenario will be attached to the relevant cycle.

None
comparators Optional[Dict[str, Union[List[Callable], Callable]]]

The list of functions used to compare scenarios. A comparator function is attached to a scenario's data node configuration. The key of the dictionary parameter corresponds to the data node configuration id. During the scenarios' comparison, each comparator is applied to all the data nodes instantiated from the data node configuration attached to the comparator. See taipy.compare_scenarios() for more details.

None
sequences Optional[Dict[str, List[TaskConfig]]]

Dictionary of sequences. The default value is None.

None
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
ScenarioConfig

The new default scenario configuration.

set_default_task_configuration() staticmethod

set_default_task_configuration(
    function: Optional[Callable],
    input: Optional[
        Union[DataNodeConfig, List[DataNodeConfig]]
    ] = None,
    output: Optional[
        Union[DataNodeConfig, List[DataNodeConfig]]
    ] = None,
    skippable: bool = False,
    **properties
) -> TaskConfig

Set the default values for task configurations.

This function creates the default task configuration object, where all task configuration objects will find their default values when needed.

Parameters:

Name Type Description Default
function Callable

The Python function called by Taipy to run the task.

required
input Optional[Union[DataNodeConfig, List[DataNodeConfig]]]

The list of the input data node configurations. This can be a unique data node configuration if there is a single input data node, or None if there are none.

None
output Optional[Union[DataNodeConfig, List[DataNodeConfig]]]

The list of the output data node configurations. This can be a unique data node configuration if there is a single output data node, or None if there are none.

None
skippable bool

If True, indicates that the task can be skipped if no change has been made on inputs.
The default value is False.

False
**properties dict[str, any]

A keyworded variable length list of additional arguments.

{}

Returns:

Type Description
TaskConfig

The default task configuration.

unblock_update() classmethod

unblock_update() -> None

Unblock update on the configuration singleton.