
This base class is used to create and define .AIFE*Transformer-like classes. It serves as a skeleton for a future concrete transformer and cannot be used to create an object of itself (an attempt to call the new-method will produce an error).

See p.1 Base Transformer Class in Transformers for Developers for details.

Create

The create-method implements the basic algorithm used to create a new transformer; it cannot be called directly.

Train

The train-method implements the basic algorithm used to train and tune the transformer; it cannot be called directly.

Concrete transformer implementation

Concrete (child) transformers (e.g. BERT, DeBERTa-V2, etc.) are already implemented. To implement a new one, see p.4 Implement A Custom Transformer in Transformers for Developers.

References

Hugging Face transformers documentation:

Public fields

params

A list containing the transformer's parameters ('static', 'dynamic', and 'dependent' parameters)

list() containing all the transformer parameters. Can be set with set_model_param().

'Static' parameters

Regardless of the transformer, the following parameters are always included:

  • text_dataset

  • sustain_track

  • sustain_iso_code

  • sustain_region

  • sustain_interval

  • trace

  • pytorch_safetensors

  • log_dir

  • log_write_interval

'Dynamic' parameters

In the case of create, it also contains (see the create-method for details):

  • model_dir

  • vocab_size

  • max_position_embeddings

  • hidden_size

  • hidden_act

  • hidden_dropout_prob

  • attention_probs_dropout_prob

  • intermediate_size

  • num_attention_heads

In the case of train, it also contains (see the train-method for details):

  • output_dir

  • model_dir_path

  • p_mask

  • whole_word

  • val_size

  • n_epoch

  • batch_size

  • chunk_size

  • min_seq_len

  • full_sequences_only

  • learning_rate

  • n_workers

  • multi_process

  • keras_trace

  • pytorch_trace

'Dependent' parameters

Depending on the transformer and the method used, the params list may contain further parameters:

  • vocab_do_lower_case

  • num_hidden_layer

  • add_prefix_space

  • etc.

temp

A list containing the transformer's temporary parameters

list() containing all the temporary local variables that need to be accessed between the step functions. Can be set with set_model_temp().

For example, it can be a variable tok_new that stores the tokenizer from steps_for_creation$create_tokenizer_draft. To train the tokenizer, access the variable tok_new in steps_for_creation$calculate_vocab through the temp list of this class.
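The following is a minimal sketch of this pattern. The step bodies, the helper make_tokenizer_draft(), and the function(self) signature of the step functions are assumptions for illustration only; set_model_temp() and the temp field are the parts documented on this page.

create_tokenizer_draft <- function(self) {
  tok_draft <- make_tokenizer_draft()        # hypothetical helper building a draft tokenizer
  self$set_model_temp("tok_new", tok_draft)  # store the draft for later steps
}

calculate_vocab <- function(self) {
  tok_draft <- self$temp$tok_new             # access the draft stored in the previous step
  # ... train the tokenizer / compute the vocabulary from tok_draft here ...
}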

Methods


Method new()

An object of this class cannot be created. Thus, a call to this method will produce an error.

Usage

Returns

This method returns an error.


Method init_transformer()

Method to execute while initializing a new transformer.

Usage

.AIFEBaseTransformer$init_transformer(title, init_trace)

Arguments

title

string A new title.

init_trace

bool Option for showing messages. If TRUE (the default), messages are printed; if FALSE, they are suppressed.

Returns

This method returns nothing.


Method set_title()

Setter for the title. Sets a new value for the title private attribute.

Usage

.AIFEBaseTransformer$set_title(title)

Arguments

title

string A new title.

Returns

This method returns nothing.


Method set_model_param()

Setter for the parameters. Adds a new parameter and its value to the params list.

Usage

.AIFEBaseTransformer$set_model_param(param_name, param_value)

Arguments

param_name

string Parameter's name.

param_value

any Parameter's value.

Returns

This method returns nothing.
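A minimal usage sketch follows; the object my_transformer (an instance of a concrete child transformer) is assumed for illustration.

# Add parameters one by one; they become accessible through the params list.
my_transformer$set_model_param("vocab_size", 30000)
my_transformer$set_model_param("hidden_act", "gelu")
my_transformer$params$vocab_size  # returns 30000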


Method set_model_temp()

Setter for the temporary model's parameters. Adds a new temporary parameter and its value to the temp list.

Usage

.AIFEBaseTransformer$set_model_temp(temp_name, temp_value)

Arguments

temp_name

string Parameter's name.

temp_value

any Parameter's value.

Returns

This method returns nothing.


Method set_SFC_check_max_pos_emb()

Setter for the check_max_pos_emb element of the private steps_for_creation list. Sets a new fun function as the check_max_pos_emb step.

Usage

.AIFEBaseTransformer$set_SFC_check_max_pos_emb(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.


Method set_SFC_create_tokenizer_draft()

Setter for the create_tokenizer_draft element of the private steps_for_creation list. Sets a new fun function as the create_tokenizer_draft step.

Usage

.AIFEBaseTransformer$set_SFC_create_tokenizer_draft(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.


Method set_SFC_calculate_vocab()

Setter for the calculate_vocab element of the private steps_for_creation list. Sets a new fun function as the calculate_vocab step.

Usage

.AIFEBaseTransformer$set_SFC_calculate_vocab(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.


Method set_SFC_save_tokenizer_draft()

Setter for the save_tokenizer_draft element of the private steps_for_creation list. Sets a new fun function as the save_tokenizer_draft step.

Usage

.AIFEBaseTransformer$set_SFC_save_tokenizer_draft(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.


Method set_SFC_create_final_tokenizer()

Setter for the create_final_tokenizer element of the private steps_for_creation list. Sets a new fun function as the create_final_tokenizer step.

Usage

.AIFEBaseTransformer$set_SFC_create_final_tokenizer(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.


Method set_SFC_create_transformer_model()

Setter for the create_transformer_model element of the private steps_for_creation list. Sets a new fun function as the create_transformer_model step.

Usage

.AIFEBaseTransformer$set_SFC_create_transformer_model(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.


Method set_required_SFC()

Setter for all required elements of the private steps_for_creation list. Executes setters for all required creation steps.

Usage

.AIFEBaseTransformer$set_required_SFC(required_SFC)

Arguments

required_SFC

list() A list of all new required steps.

Returns

This method returns nothing.
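A hedged sketch of registering all creation steps at once is shown below. The element names are taken from the individual SFC setters documented on this page; my_transformer, the empty step bodies, and the function(self) signature are placeholders for illustration.

my_transformer$set_required_SFC(
  list(
    create_tokenizer_draft   = function(self) { },  # build a tokenizer draft
    calculate_vocab          = function(self) { },  # compute the vocabulary
    save_tokenizer_draft     = function(self) { },  # save the draft to disk
    create_final_tokenizer   = function(self) { },  # build the final tokenizer
    create_transformer_model = function(self) { }   # build the transformer model
  )
)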


Method set_SFT_load_existing_model()

Setter for the load_existing_model element of the private steps_for_training list. Sets a new fun function as the load_existing_model step.

Usage

.AIFEBaseTransformer$set_SFT_load_existing_model(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.


Method set_SFT_cuda_empty_cache()

Setter for the cuda_empty_cache element of the private steps_for_training list. Sets a new fun function as the cuda_empty_cache step.

Usage

.AIFEBaseTransformer$set_SFT_cuda_empty_cache(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.


Method set_SFT_create_data_collator()

Setter for the create_data_collator element of the private steps_for_training list. Sets a new fun function as the create_data_collator step. Use this method to make a custom data collator for a transformer.

Usage

.AIFEBaseTransformer$set_SFT_create_data_collator(fun)

Arguments

fun

function() A new function.

Returns

This method returns nothing.
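A hedged sketch of such a custom data collator step follows. my_transformer, the function(self) signature, and storing the result in the temp list are assumptions for illustration; DataCollatorForWholeWordMask is a class of the python library transformers, accessed here via reticulate.

my_transformer$set_SFT_create_data_collator(
  function(self) {
    transformers <- reticulate::import("transformers")
    # Store the collator in temp so later training steps can access it
    # (assumed convention for passing objects between steps).
    self$temp$data_collator <- transformers$DataCollatorForWholeWordMask(
      tokenizer = self$temp$tokenizer,
      mlm = TRUE,
      mlm_probability = self$params$p_mask
    )
  }
)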


Method create()

This method creates a transformer configuration based on the child-transformer architecture and a vocabulary using the python libraries transformers and tokenizers.

This method adds the following parameters to the temp list:

  • log_file

  • raw_text_dataset

  • pt_safe_save

  • value_top

  • total_top

  • message_top

This method uses the following parameters from the temp list:

  • log_file

  • raw_text_dataset

  • tokenizer

Usage

.AIFEBaseTransformer$create(
  model_dir,
  text_dataset,
  vocab_size,
  max_position_embeddings,
  hidden_size,
  num_attention_heads,
  intermediate_size,
  hidden_act,
  hidden_dropout_prob,
  attention_probs_dropout_prob,
  sustain_track,
  sustain_iso_code,
  sustain_region,
  sustain_interval,
  trace,
  pytorch_safetensors,
  log_dir,
  log_write_interval
)

Arguments

model_dir

string Path to the directory where the model should be saved. Allowed values: any

text_dataset

LargeDataSetForText Object storing textual data.

vocab_size

int Size of the vocabulary. Allowed values: 1000 <= x <= 5e+05

max_position_embeddings

int Number of maximum position embeddings. This parameter also determines the maximum length of a sequence which can be processed with the model. Allowed values: 10 <= x <= 4048

hidden_size

int Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding. Allowed values: 1 <= x <= 2048

num_attention_heads

int determining the number of attention heads for a self-attention layer. Only relevant if attention_type = 'multihead'. Allowed values: 0 <= x

intermediate_size

int determining the size of the projection layer within each transformer encoder. Allowed values: 1 <= x

hidden_act

string Name of the activation function. Allowed values: 'gelu', 'relu', 'silu', 'gelu_new'

hidden_dropout_prob

double Ratio of dropout. Allowed values: 0 <= x <= 0.6

attention_probs_dropout_prob

double Ratio of dropout for attention probabilities. Allowed values: 0 <= x <= 0.6

sustain_track

bool If TRUE energy consumption is tracked during training via the python library 'codecarbon'.

sustain_iso_code

string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes. Allowed values: any

sustain_region

string Region within a country. Only available for the USA and Canada. See the documentation of codecarbon for more information: https://mlco2.github.io/codecarbon/parameters.html. Allowed values: any

sustain_interval

int Interval in seconds for measuring power usage. Allowed values: 1 <= x

trace

bool TRUE if information about the estimation phase should be printed to the console.

pytorch_safetensors

bool

  • TRUE: a 'pytorch' model is saved in safetensors format.

  • FALSE (or 'safetensors' is not available): model is saved in the standard pytorch format (.bin).

log_dir

string Path to the directory where the log files should be saved. If no logging is desired set this argument to NULL. Allowed values: any

log_write_interval

int Time in seconds determining the interval at which the logger writes to the log files. Only relevant if log_dir is not NULL.

Returns

This method does not return an object. Instead, it saves the configuration and vocabulary of the new model to disk.
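A hedged usage sketch of this method with a concrete child transformer is given below; my_transformer and text_data (a LargeDataSetForText object) are assumed to exist, and all argument values are illustrative only.

my_transformer$create(
  model_dir = "models/my_transformer",
  text_dataset = text_data,
  vocab_size = 30000,
  max_position_embeddings = 512,
  hidden_size = 256,
  num_attention_heads = 4,
  intermediate_size = 1024,
  hidden_act = "gelu",
  hidden_dropout_prob = 0.1,
  attention_probs_dropout_prob = 0.1,
  sustain_track = FALSE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15,
  trace = TRUE,
  pytorch_safetensors = TRUE,
  log_dir = NULL,
  log_write_interval = 2
)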


Method train()

This method can be used to train or fine-tune a transformer based on the child-transformer architecture with the help of the python libraries transformers, datasets, and tokenizers.

This method adds the following parameters to the temp list:

  • log_file

  • loss_file

  • from_pt

  • from_tf

  • load_safe

  • raw_text_dataset

  • pt_safe_save

  • value_top

  • total_top

  • message_top

This method uses the following parameters from the temp list:

  • log_file

  • raw_text_dataset

  • tokenized_dataset

  • tokenizer

Usage

.AIFEBaseTransformer$train(
  output_dir,
  model_dir_path,
  text_dataset,
  p_mask,
  whole_word,
  val_size,
  n_epoch,
  batch_size,
  chunk_size,
  full_sequences_only,
  min_seq_len,
  learning_rate,
  sustain_track,
  sustain_iso_code,
  sustain_region,
  sustain_interval,
  trace,
  pytorch_trace,
  pytorch_safetensors,
  log_dir,
  log_write_interval
)

Arguments

output_dir

string Path to the directory where the model should be saved. Allowed values: any

model_dir_path

string Path to the directory where the original model is stored. Allowed values: any

text_dataset

LargeDataSetForText Object storing textual data.

p_mask

double Ratio that determines the number of words/tokens used for masking. Allowed values: 0 < x < 1

whole_word

bool

  • TRUE: whole word masking should be applied.

  • FALSE: token masking is used.

val_size

double between 0 and 1, indicating the proportion of cases which should be used for the validation sample during the estimation of the model. The remaining cases are part of the training data. Allowed values: 0 < x < 1

n_epoch

int Number of training epochs. Allowed values: 1 <= x

batch_size

int Size of the batches for training. Allowed values: 1 <= x

chunk_size

int Maximum length of every sequence. Must be equal to or less than the global maximum size allowed by the model. Allowed values: 100 <= x

full_sequences_only

bool TRUE for using only chunks with a sequence length equal to chunk_size.

min_seq_len

int Only relevant if full_sequences_only = FALSE. This value determines the minimal sequence length included in the training process. Allowed values: 10 <= x

learning_rate

double Initial learning rate for the training. Allowed values: 0 < x <= 1

sustain_track

bool If TRUE energy consumption is tracked during training via the python library 'codecarbon'.

sustain_iso_code

string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes. Allowed values: any

sustain_region

string Region within a country. Only available for USA and Canada See the documentation of codecarbon for more information. https://mlco2.github.io/codecarbon/parameters.html Allowed values: any

sustain_interval

int Interval in seconds for measuring power usage. Allowed values: 1 <= x

trace

bool TRUE if information about the estimation phase should be printed to the console.

pytorch_trace

int pytorch_trace = 0 does not print any information about the training process from pytorch on the console; pytorch_trace = 1 prints a progress bar.

pytorch_safetensors

bool

  • TRUE: a 'pytorch' model is saved in safetensors format.

  • FALSE (or 'safetensors' is not available): model is saved in the standard pytorch format (.bin).

log_dir

string Path to the directory where the log files should be saved. If no logging is desired set this argument to NULL. Allowed values: any

log_write_interval

int Time in seconds determining the interval at which the logger writes to the log files. Only relevant if log_dir is not NULL.

Returns

This method does not return an object. Instead, it saves the configuration and vocabulary of the new model to disk.
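A hedged usage sketch, continuing the create() example above; my_transformer and text_data are assumed to exist, and all argument values are illustrative only.

my_transformer$train(
  output_dir = "models/my_transformer_trained",
  model_dir_path = "models/my_transformer",
  text_dataset = text_data,
  p_mask = 0.15,
  whole_word = TRUE,
  val_size = 0.1,
  n_epoch = 5,
  batch_size = 32,
  chunk_size = 250,
  full_sequences_only = FALSE,
  min_seq_len = 50,
  learning_rate = 3e-4,
  sustain_track = FALSE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15,
  trace = TRUE,
  pytorch_trace = 1,
  pytorch_safetensors = TRUE,
  log_dir = NULL,
  log_write_interval = 2
)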


Method clone()

The objects of this class are cloneable with this method.

Usage

.AIFEBaseTransformer$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.