taipy.config.Config
Configuration singleton.
backup(filename)
classmethod
Backup a configuration.
The backup is done in a toml file. The backed-up configuration is the compilation of the three possible configuration methods: the Python code configuration, the file configuration, and the environment configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filename | Union[str, Path] | The path of the file in which to save the configuration. | required |
Note
If filename already exists, it is overwritten.
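Example (a minimal sketch; the file name is an assumption for illustration):
```python
from taipy import Config

# Save the currently compiled configuration to a TOML file.
Config.backup("backup_config.toml")

# ... later, re-apply the backed-up configuration.
Config.restore("backup_config.toml")
```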
block_update()
classmethod
Block update on the configuration singleton.
check()
classmethod
Check configuration.
This method logs issue messages and returns an issue collector.
Returns:
Type | Description |
---|---|
IssueCollector | Collector containing the info, warning and error issues. |
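Example (a hedged sketch; it assumes the returned IssueCollector exposes errors, warnings, and infos lists whose items carry a message attribute):
```python
from taipy import Config

collector = Config.check()
for issue in collector.errors:    # assumed attribute
    print(f"ERROR: {issue.message}")
for issue in collector.warnings:  # assumed attribute
    print(f"WARNING: {issue.message}")
```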
configure_authentication(protocol, secret_key=None, auth_session_duration=3600, **properties)
staticmethod
Configure authentication.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
protocol | str | The name of the protocol to configure ("ldap", "taipy" or "none"). | required |
secret_key | str | A secret string used to internally encrypt the credentials' information. If no value is provided, the first run-time authentication sets the default value to a random text string. | None |
auth_session_duration | int | How long, in seconds, credentials remain valid after their creation. The default value is 3600, corresponding to an hour. | 3600 |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
| The authentication configuration. |
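Example (a minimal sketch; the secret key value is a placeholder):
```python
from taipy import Config

# One-hour sessions encrypted with an explicit secret key.
Config.configure_authentication(
    protocol="taipy",
    secret_key="change-me",
    auth_session_duration=3600,
)
```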
configure_csv_data_node(id, default_path=None, has_header=True, exposed_type='pandas', scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new CSV data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new CSV data node configuration. | required |
default_path | str | The default path of the CSV file. | None |
has_header | bool | If True, indicates that the CSV file has a header. | True |
exposed_type | | The exposed type of the data read from the CSV file. The default value is "pandas". | 'pandas' |
scope | Scope | The scope of the CSV data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new CSV data node configuration. |
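Example (a minimal sketch; the identifier and file path are illustrative):
```python
from taipy import Config, Scope

sales_cfg = Config.configure_csv_data_node(
    id="sales_history",
    default_path="data/sales.csv",
    has_header=True,
    exposed_type="pandas",  # read the file as a pandas DataFrame
    scope=Scope.SCENARIO,
)
```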
configure_data_node(id, storage_type=None, scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new data node configuration. | required |
storage_type | Optional[str] | The data node configuration storage type. The possible values are "pickle", "csv", "excel", "sql", "mongo_collection", "in_memory", "json", "parquet", "generic", or None. When None (the default), the storage type is "pickle", unless it has been overloaded by the storage_type value set in the default data node configuration (see configure_default_data_node()). | None |
scope | Scope | The scope of the data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new data node configuration. |
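Example (a minimal sketch; the identifier is illustrative):
```python
from taipy import Config

# With storage_type=None, the node would fall back to "pickle" (or the
# configured default); here the storage type is set explicitly.
result_cfg = Config.configure_data_node("intermediate_result", storage_type="in_memory")
```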
configure_default_data_node(storage_type, scope=Scope.SCENARIO, **properties)
staticmethod
Configure the default values for data node configurations.
This function creates the default data node configuration object, where all data node configuration objects will find their default values when needed.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
storage_type | str | The default storage type for all data node configurations. The possible values are "pickle" (the default value), "csv", "excel", "sql", "mongo_collection", "in_memory", "json", "parquet" or "generic". | required |
scope | Scope | The default scope for all data node configurations. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The default data node configuration. |
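Example (a minimal sketch; identifiers are illustrative):
```python
from taipy import Config, Scope

# Every subsequent data node configuration defaults to CSV storage
# with a CYCLE scope.
Config.configure_default_data_node(storage_type="csv", scope=Scope.CYCLE)
raw_cfg = Config.configure_data_node("raw_data")  # inherits storage_type="csv"
```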
configure_default_pipeline(task_configs, **properties)
staticmethod
Configure the default values for pipeline configurations.
This function creates the default pipeline configuration object, where all pipeline configuration objects will find their default values when needed.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
task_configs | Union[TaskConfig, List[TaskConfig]] | The list of the task configurations that make the default pipeline configuration. This can be a single task configuration object if this pipeline holds a single task. | required |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
PipelineConfig | The default pipeline configuration. |
configure_default_scenario(pipeline_configs, frequency=None, comparators=None, **properties)
staticmethod
Configure the default values for scenario configurations.
This function creates the default scenario configuration object, where all scenario configuration objects will find their default values when needed.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
pipeline_configs | List[PipelineConfig] | The list of pipeline configurations used by this scenario configuration. | required |
frequency | Optional[Frequency] | The scenario frequency. It corresponds to the recurrence of the scenarios instantiated from this configuration. Based on this frequency each scenario will be attached to the relevant cycle. | None |
comparators | Optional[Dict[str, Union[List[Callable], Callable]]] | The list of functions used to compare scenarios. A comparator function is attached to a scenario's data node configuration. The key of the dictionary parameter corresponds to the data node configuration id. During the scenarios' comparison, each comparator is applied to all the data nodes instantiated from the data node configuration attached to the comparator. | None |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
ScenarioConfig | The default scenario configuration. |
configure_default_task(function, input=None, output=None, skippable=False, **properties)
staticmethod
Configure the default values for task configurations.
This function creates the default task configuration object, where all task configuration objects will find their default values when needed.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
function | Callable | The Python function called by Taipy to run the task. | required |
input | Optional[Union[DataNodeConfig, List[DataNodeConfig]]] | The list of the input data node configurations. This can be a unique data node configuration if there is a single input data node, or None if there are none. | None |
output | Optional[Union[DataNodeConfig, List[DataNodeConfig]]] | The list of the output data node configurations. This can be a unique data node configuration if there is a single output data node, or None if there are none. | None |
skippable | bool | If True, indicates that the task can be skipped if no change has been made on inputs. The default value is False. | False |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
TaskConfig | The default task configuration. |
configure_excel_data_node(id, default_path=None, has_header=True, sheet_name=None, exposed_type='pandas', scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new Excel data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new Excel data node configuration. | required |
default_path | str | The path of the Excel file. | None |
has_header | bool | If True, indicates that the Excel file has a header. | True |
sheet_name | Union[List[str], str] | The list of sheet names to be used. This can be a unique name. | None |
exposed_type | | The exposed type of the data read from the Excel file. The default value is "pandas". | 'pandas' |
scope | Scope | The scope of the Excel data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new Excel data node configuration. |
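Example (a minimal sketch; the path and sheet names are illustrative):
```python
from taipy import Config

forecast_cfg = Config.configure_excel_data_node(
    id="forecasts",
    default_path="data/forecasts.xlsx",
    sheet_name=["january", "february"],  # a single sheet name also works
)
```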
configure_generic_data_node(id, read_fct=None, write_fct=None, read_fct_params=None, write_fct_params=None, scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new generic data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new generic data node configuration. | required |
read_fct | Optional[Callable] | The Python function called to read the data. | None |
write_fct | Optional[Callable] | The Python function called to write the data. The provided function must have at least one parameter that receives the data to be written. | None |
read_fct_params | Optional[List] | The parameters that are passed to read_fct to read the data. | None |
write_fct_params | Optional[List] | The parameters that are passed to write_fct to write the data. | None |
scope | Optional[Scope] | The scope of the generic data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new generic data node configuration. |
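Example (a hedged sketch; the file path is illustrative, and it assumes the data to write is passed to write_fct before the write_fct_params):
```python
from taipy import Config

def read_lines(path):
    # Custom reader: return the file content as a list of lines.
    with open(path) as f:
        return f.read().splitlines()

def write_lines(data, path):
    # Custom writer: the first parameter receives the data to write.
    with open(path, "w") as f:
        f.write("\n".join(data))

log_cfg = Config.configure_generic_data_node(
    id="log_lines",
    read_fct=read_lines,
    read_fct_params=["data/app.log"],
    write_fct=write_lines,
    write_fct_params=["data/app.log"],
)
```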
configure_global_app(root_folder=None, storage_folder=None, clean_entities_enabled=None, **properties)
classmethod
Configure the global application.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
root_folder | Optional[str] | The path of the base folder for the Taipy application. | None |
storage_folder | Optional[str] | The folder name used to store Taipy data. It is used in conjunction with the root_folder field: the storage path is "<root_folder><storage_folder>". | None |
clean_entities_enabled | Optional[str] | The field to activate or deactivate the 'clean entities' feature. The default value is False. | None |
Returns:
Type | Description |
---|---|
GlobalAppConfig | The global application configuration. |
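Example (a minimal sketch; folder names are illustrative):
```python
from taipy import Config

# The storage path becomes "<root_folder><storage_folder>",
# i.e. "./my_app/.data/" here.
Config.configure_global_app(root_folder="./my_app/", storage_folder=".data/")
```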
configure_gui(**properties)
staticmethod
Configure the Graphical User Interface.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
**properties | Dict[str, Any] | Keyword arguments that configure the behavior of the Gui instances. | {} |
Returns:
Type | Description |
---|---|
| The GUI configuration. |
configure_in_memory_data_node(id, default_data=None, scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new in_memory data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new in_memory data node configuration. | required |
default_data | Optional[Any] | The default data of the data nodes instantiated from this in_memory data node configuration. | None |
scope | Scope | The scope of the in_memory data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new in_memory data node configuration. |
configure_job_executions(mode=None, nb_of_workers=None, max_nb_of_workers=None, **properties)
staticmethod
Configure job execution.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
mode | Optional[str] | The job execution mode. Possible values are: "standalone" (the default value) or "development". | None |
max_nb_of_workers | Optional[int, str] | Parameter used only in the default "standalone" mode. The maximum number of jobs able to run in parallel. The default value is 1. | None |
nb_of_workers | Optional[int, str] | Deprecated. Use max_nb_of_workers instead. | None |
Returns:
Type | Description |
---|---|
JobConfig | The job execution configuration. |
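Example (a minimal sketch):
```python
from taipy import Config

# Allow up to four jobs to run in parallel in the default standalone mode.
Config.configure_job_executions(mode="standalone", max_nb_of_workers=4)
```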
configure_json_data_node(id, default_path=None, encoder=None, decoder=None, scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new JSON data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new JSON data node configuration. | required |
default_path | str | The default path of the JSON file. | None |
encoder | json.JSONEncoder | The JSON encoder used to write data into the JSON file. | None |
decoder | json.JSONDecoder | The JSON decoder used to read data from the JSON file. | None |
scope | Scope | The scope of the JSON data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new JSON data node configuration. |
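Example (a hedged sketch with a custom encoder for datetime values; the path and encoder class are illustrative):
```python
import json
from datetime import datetime
from taipy import Config

class DatetimeEncoder(json.JSONEncoder):
    # Serialize datetime objects as ISO strings; defer everything else.
    def default(self, o):
        if isinstance(o, datetime):
            return o.isoformat()
        return super().default(o)

events_cfg = Config.configure_json_data_node(
    id="events",
    default_path="data/events.json",
    encoder=DatetimeEncoder,
)
```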
configure_mongo_collection_data_node(id, db_name, collection_name, custom_document=DefaultCustomDocument, db_username='', db_password='', db_host='localhost', db_port=27017, db_extra_args={}, scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new Mongo collection data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new Mongo collection data node configuration. | required |
db_name | str | The database name. | required |
collection_name | str | The collection in the database to read from and to write the data to. | required |
custom_document | Any | The custom document class to store, encode, and decode data when reading from and writing to the Mongo collection. The custom_document can have optional encode and decode methods to convert the object to and from the stored document. | DefaultCustomDocument |
db_username | str | The database username. | '' |
db_password | str | The database password. | '' |
db_host | str | The database host. The default value is "localhost". | 'localhost' |
db_port | int | The database port. The default value is 27017. | 27017 |
db_extra_args | Dict[str, Any] | A dictionary of additional arguments to be passed into the database connection string. | {} |
scope | Scope | The scope of the Mongo collection data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new Mongo collection data node configuration. |
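Example (a minimal sketch; database names and credentials are placeholders for a local instance):
```python
from taipy import Config

sales_cfg = Config.configure_mongo_collection_data_node(
    id="sales_records",
    db_name="shop",
    collection_name="sales",
    db_username="taipy_user",
    db_password="taipy_pwd",
    db_host="localhost",
    db_port=27017,
)
```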
configure_parquet_data_node(id, default_path=None, exposed_type='pandas', engine='pyarrow', compression='snappy', read_kwargs={}, write_kwargs={}, scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new Parquet data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new Parquet data node configuration. | required |
default_path | str | The default path of the Parquet file. | None |
exposed_type | | The exposed type of the data read from the Parquet file. The default value is "pandas". | 'pandas' |
engine | Optional[str] | Parquet library to use. Possible values are "fastparquet" or "pyarrow". The default value is "pyarrow". | 'pyarrow' |
compression | Optional[str] | Name of the compression to use. Use None for no compression. The default value is "snappy". | 'snappy' |
read_kwargs | Optional[Dict] | Additional parameters passed to the pandas.read_parquet function. | {} |
write_kwargs | Optional[Dict] | Additional parameters passed to the pandas.DataFrame.to_parquet function. | {} |
scope | Scope | The scope of the Parquet data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new Parquet data node configuration. |
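Example (a minimal sketch; the path and column names are illustrative):
```python
from taipy import Config

features_cfg = Config.configure_parquet_data_node(
    id="features",
    default_path="data/features.parquet",
    engine="pyarrow",
    compression="snappy",
    read_kwargs={"columns": ["date", "value"]},  # forwarded to the reader
)
```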
configure_pickle_data_node(id, default_data=None, scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new pickle data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new pickle data node configuration. | required |
default_data | Optional[Any] | The default data of the data nodes instantiated from this pickle data node configuration. | None |
scope | Scope | The scope of the pickle data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new pickle data node configuration. |
configure_pipeline(id, task_configs, **properties)
staticmethod
Configure a new pipeline configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new pipeline configuration. | required |
task_configs | Union[TaskConfig, List[TaskConfig]] | The list of the task configurations that make this new pipeline. This can be a single task configuration object if this pipeline holds a single task. | required |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
PipelineConfig | The new pipeline configuration. |
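Example (a minimal sketch chaining data nodes, a task, and a pipeline; identifiers and the doubling function are illustrative):
```python
from taipy import Config

def double(nb):
    return nb * 2

input_cfg = Config.configure_data_node("input_number")
output_cfg = Config.configure_data_node("doubled_number")
double_task_cfg = Config.configure_task(
    "double", double, input=input_cfg, output=output_cfg
)
pipeline_cfg = Config.configure_pipeline("doubling_pipeline", [double_task_cfg])
```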
configure_scenario(id, pipeline_configs, frequency=None, comparators=None, **properties)
staticmethod
Configure a new scenario configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new scenario configuration. | required |
pipeline_configs | List[PipelineConfig] | The list of pipeline configurations used by this new scenario configuration. | required |
frequency | Optional[Frequency] | The scenario frequency. It corresponds to the recurrence of the scenarios instantiated from this configuration. Based on this frequency each scenario will be attached to the relevant cycle. | None |
comparators | Optional[Dict[str, Union[List[Callable], Callable]]] | The list of functions used to compare scenarios. A comparator function is attached to a scenario's data node configuration. The key of the dictionary parameter corresponds to the data node configuration id. During the scenarios' comparison, each comparator is applied to all the data nodes instantiated from the data node configuration attached to the comparator. | None |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
ScenarioConfig | The new scenario configuration. |
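Example (a hedged sketch reusing pipeline_cfg from the example above, with a monthly frequency and a comparator attached to the "doubled_number" data node configuration; it assumes Frequency is importable from the taipy package):
```python
from taipy import Config, Frequency

def compare_totals(*totals):
    # Applied to the data nodes built from the "doubled_number" configuration.
    return max(totals) - min(totals)

scenario_cfg = Config.configure_scenario(
    "monthly_scenario",
    [pipeline_cfg],
    frequency=Frequency.MONTHLY,
    comparators={"doubled_number": compare_totals},
)
```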
configure_scenario_from_tasks(id, task_configs, frequency=None, comparators=None, pipeline_id=None, **properties)
staticmethod
Configure a new scenario configuration made of a single new pipeline configuration.
A new pipeline configuration is created as well. If pipeline_id is not provided, the new pipeline configuration identifier is set to the scenario configuration identifier post-fixed by '_pipeline'.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the scenario configuration. | required |
task_configs | List[TaskConfig] | The list of task configurations used by the new pipeline configuration that is created. | required |
frequency | Optional[Frequency] | The scenario frequency. It corresponds to the recurrence of the scenarios instantiated from this configuration. Based on this frequency each scenario will be attached to the relevant cycle. | None |
comparators | Optional[Dict[str, Union[List[Callable], Callable]]] | The list of functions used to compare scenarios. A comparator function is attached to a scenario's data node configuration. The key of the dictionary parameter corresponds to the data node configuration id. During the scenarios' comparison, each comparator is applied to all the data nodes instantiated from the data node configuration attached to the comparator. | None |
pipeline_id | str | The identifier of the new pipeline configuration to be configured. | None |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
ScenarioConfig | The new scenario configuration. |
configure_sql_data_node(id, db_username, db_password, db_name, db_engine, db_port=1433, db_host='localhost', db_driver='ODBC Driver 17 for SQL Server', db_extra_args=None, read_query=None, write_query_builder=None, exposed_type='pandas', scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new SQL data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new SQL data node configuration. | required |
db_username | str | The database username. | required |
db_password | str | The database password. | required |
db_name | str | The database name. | required |
db_engine | str | The database engine. Possible values are "sqlite", "mssql", "mysql", or "postgresql". | required |
db_port | int | The database port. The default value is 1433. | 1433 |
db_host | str | The database host. The default value is "localhost". | 'localhost' |
db_driver | str | The database driver. The default value is "ODBC Driver 17 for SQL Server". | 'ODBC Driver 17 for SQL Server' |
db_extra_args | Dict[str, Any] | A dictionary of additional arguments to be passed into the database connection string. | None |
read_query | str | The SQL query string used to read the data from the database. | None |
write_query_builder | Callable | A callback function that takes the data as an input parameter and returns a list of SQL queries. | None |
exposed_type | | The exposed type of the data read from the SQL query. The default value is "pandas". | 'pandas' |
scope | Scope | The scope of the SQL data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new SQL data node configuration. |
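Example (a hedged sketch; the table, columns, and credentials are illustrative, and the builder follows the documented contract of taking the data and returning a list of queries):
```python
from taipy import Config

def sales_query_builder(data):
    # Illustrative strategy: rewrite the table on every write
    # (assumes the data is exposed as a pandas DataFrame).
    queries = ["DELETE FROM sales"]
    for _, row in data.iterrows():
        queries.append(
            f"INSERT INTO sales (date, amount) VALUES ('{row['date']}', {row['amount']})"
        )
    return queries

sales_cfg = Config.configure_sql_data_node(
    id="sales",
    db_username="taipy_user",
    db_password="taipy_pwd",
    db_name="shop",
    db_engine="mssql",
    read_query="SELECT date, amount FROM sales",
    write_query_builder=sales_query_builder,
)
```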
configure_sql_table_data_node(id, db_username, db_password, db_name, db_engine, table_name=None, db_port=1433, db_host='localhost', db_driver='ODBC Driver 17 for SQL Server', db_extra_args=None, exposed_type='pandas', scope=Scope.SCENARIO, **properties)
staticmethod
Configure a new SQL table data node configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of the new SQL table data node configuration. | required |
db_username | str | The database username. | required |
db_password | str | The database password. | required |
db_name | str | The database name. | required |
db_engine | str | The database engine. Possible values are "sqlite", "mssql", "mysql", or "postgresql". | required |
table_name | str | The name of the SQL table. | None |
db_port | int | The database port. The default value is 1433. | 1433 |
db_host | str | The database host. The default value is "localhost". | 'localhost' |
db_driver | str | The database driver. The default value is "ODBC Driver 17 for SQL Server". | 'ODBC Driver 17 for SQL Server' |
db_extra_args | Dict[str, Any] | A dictionary of additional arguments to be passed into the database connection string. | None |
exposed_type | | The exposed type of the data read from the SQL table. The default value is "pandas". | 'pandas' |
scope | Scope | The scope of the SQL table data node configuration. The default value is Scope.SCENARIO. | Scope.SCENARIO |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
DataNodeConfig | The new SQL table data node configuration. |
configure_task(id, function, input=None, output=None, skippable=False, **properties)
staticmethod
Configure a new task configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
id | str | The unique identifier of this task configuration. | required |
function | Callable | The Python function called by Taipy to run the task. | required |
input | Optional[Union[DataNodeConfig, List[DataNodeConfig]]] | The list of the function input data node configurations. This can be a unique data node configuration if there is a single input data node, or None if there are none. | None |
output | Optional[Union[DataNodeConfig, List[DataNodeConfig]]] | The list of the function output data node configurations. This can be a unique data node configuration if there is a single output data node, or None if there are none. | None |
skippable | bool | If True, indicates that the task can be skipped if no change has been made on inputs. The default value is False. | False |
**properties | Dict[str, Any] | A keyworded variable length list of additional arguments. | {} |
Returns:
Type | Description |
---|---|
TaskConfig | The new task configuration. |
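Example (a minimal sketch of a skippable task; identifiers and the function are illustrative):
```python
from taipy import Config

def average(history):
    # Assumption: history is exposed as a pandas DataFrame.
    return history["value"].mean()

history_cfg = Config.configure_csv_data_node("history", default_path="data/history.csv")
avg_cfg = Config.configure_pickle_data_node("average_value")
avg_task_cfg = Config.configure_task(
    "compute_average",
    average,
    input=history_cfg,
    output=avg_cfg,
    skippable=True,  # skip re-running when inputs have not changed
)
```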
data_nodes()
Return all data node configurations grouped by id in a dictionary.
Config.data_nodes() is an alias for Config.sections["DATA_NODE"].
export(filename)
classmethod
Export a configuration.
The export is done in a toml file. The exported configuration is taken from the Python code configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filename | Union[str, Path] | The path of the file to export. | required |
Note
If filename already exists, it is overwritten.
global_config()
Return configuration values related to the global application as a GlobalAppConfig.
load(filename)
classmethod
Load a configuration file to replace the current Python config and trigger the Config compilation.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filename | Union[str, Path] | The path of the toml configuration file to load. | required |
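Example (a minimal sketch; the file name is an assumption):
```python
from taipy import Config

# Replace the current Python configuration with the one from a TOML file
# and trigger recompilation.
Config.load("config.toml")
```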
override(filename)
classmethod
Load a configuration from a file and override the current config.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filename | Union[str, Path] | The path of the toml configuration file to load. | required |
pipelines()
Return all pipeline configurations grouped by id in a dictionary.
Config.pipelines() is an alias for Config.sections["PIPELINE"].
restore(filename)
classmethod
Restore a configuration from a file and replace the currently applied configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filename | Union[str, Path] | The path of the toml configuration file to load. | required |
scenarios()
Return all scenario configurations grouped by id in a dictionary.
Config.scenarios() is an alias for Config.sections["SCENARIO"].
sections()
Return all non-unique sections.
tasks()
Return all task configurations grouped by id in a dictionary.
Config.tasks() is an alias for Config.sections["TASK"].
unblock_update()
classmethod
Unblock update on the configuration singleton.
unique_sections()
Return all unique sections.