This class has the following methods:

  • create: creates a new transformer based on RoBERTa.

  • train: trains and fine-tunes a RoBERTa model.

Create

New models can be created using the .AIFERobertaTransformer$create method.
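A minimal sketch of such a call is shown below. It assumes that my_dataset is an existing object of class LargeDataSetForText, that the Python environment used by 'aifeducation' is already configured, and that all paths are hypothetical placeholders.

library(aifeducation)

.AIFERobertaTransformer$create(
  ml_framework = "pytorch",
  model_dir = "models/my_roberta",  # hypothetical path
  text_dataset = my_dataset,        # existing LargeDataSetForText
  vocab_size = 30522,
  max_position_embeddings = 512,
  hidden_size = 768,
  num_hidden_layer = 12,
  num_attention_heads = 12,
  sustain_track = FALSE             # disable codecarbon tracking for this sketch
)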

Train

To train the model, pass the directory of the model to the .AIFERobertaTransformer$train method.

Pre-trained models that can be fine-tuned with this method are available at https://huggingface.co/.

Training of this model makes use of dynamic masking, i.e. the masking pattern is generated anew every time a sequence is fed to the model (Liu et al. 2019).
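A minimal sketch of a fine-tuning call is shown below. It assumes that a model was previously created in models/my_roberta (as in the example above) and that my_dataset is a LargeDataSetForText holding the training corpus; all paths are hypothetical.

.AIFERobertaTransformer$train(
  ml_framework = "pytorch",
  output_dir = "models/my_roberta_trained",  # hypothetical path
  model_dir_path = "models/my_roberta",
  text_dataset = my_dataset,
  p_mask = 0.15,       # share of tokens selected for (dynamic) masking
  val_size = 0.1,      # share of token chunks used for validation
  n_epoch = 2,
  batch_size = 12,
  chunk_size = 250,
  sustain_track = FALSE
)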

References

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. doi:10.48550/arXiv.1907.11692

Hugging Face Documentation

Super class

aifeducation::.AIFEBaseTransformer -> .AIFERobertaTransformer

Methods

Inherited methods


Method new()

Creates a new transformer based on RoBERTa and sets the title.

Returns

This method returns nothing.


Method create()

This method creates a transformer configuration based on the RoBERTa base architecture and a vocabulary based on a Byte-Pair Encoding (BPE) tokenizer, using the Python libraries transformers and tokenizers.

This method adds the following 'dependent' parameters to the base class' inherited params list:

  • add_prefix_space

  • trim_offsets

  • num_hidden_layer

Usage

.AIFERobertaTransformer$create(
  ml_framework = "pytorch",
  model_dir,
  text_dataset,
  vocab_size = 30522,
  add_prefix_space = FALSE,
  trim_offsets = TRUE,
  max_position_embeddings = 512,
  hidden_size = 768,
  num_hidden_layer = 12,
  num_attention_heads = 12,
  intermediate_size = 3072,
  hidden_act = "gelu",
  hidden_dropout_prob = 0.1,
  attention_probs_dropout_prob = 0.1,
  sustain_track = TRUE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15,
  trace = TRUE,
  pytorch_safetensors = TRUE,
  log_dir = NULL,
  log_write_interval = 2
)

Arguments

ml_framework

string Framework to use for training and inference.

  • ml_framework = "tensorflow": for 'tensorflow'.

  • ml_framework = "pytorch": for 'pytorch'.

model_dir

string Path to the directory where the model should be saved.

text_dataset

Object of class LargeDataSetForText.

vocab_size

int Size of the vocabulary.

add_prefix_space

bool TRUE if an additional space should be inserted before the leading word, so that it is treated like any other word.

trim_offsets

bool TRUE if whitespace should be trimmed from the produced offsets.

max_position_embeddings

int Number of maximum position embeddings. This parameter also determines the maximum length of a sequence which can be processed with the model.

hidden_size

int Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding.

num_hidden_layer

int Number of hidden layers.

num_attention_heads

int Number of attention heads.

intermediate_size

int Number of neurons in the intermediate layer of the attention mechanism.

hidden_act

string Name of the activation function.

hidden_dropout_prob

double Ratio of dropout.

attention_probs_dropout_prob

double Ratio of dropout for attention probabilities.

sustain_track

bool If TRUE, energy consumption is tracked during training via the Python library codecarbon.

sustain_iso_code

string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes.

sustain_region

string Region within a country. Only available for USA and Canada. See the documentation of codecarbon for more information https://mlco2.github.io/codecarbon/parameters.html.

sustain_interval

integer Interval in seconds for measuring power usage.

trace

bool TRUE if information about the progress should be printed to the console.

pytorch_safetensors

bool Only relevant for pytorch models.

  • TRUE: a 'pytorch' model is saved in safetensors format.

  • FALSE (or if 'safetensors' is not available): the model is saved in the standard pytorch format (.bin).

log_dir

string Path to the directory where the log files should be saved.

log_write_interval

int Time in seconds determining the interval in which the logger should try to update the log files. Only relevant if log_dir is not NULL.

Returns

This method does not return an object. Instead, it saves the configuration and vocabulary of the new model to disk.
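As a purely illustrative follow-up (path hypothetical), the files written by create() can be inspected with base R; the exact file names depend on the chosen ml_framework and the installed versions of transformers and tokenizers.

list.files("models/my_roberta")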


Method train()

This method can be used to train or fine-tune a transformer based on the RoBERTa architecture with the help of the Python libraries transformers, datasets, and tokenizers.

Usage

.AIFERobertaTransformer$train(
  ml_framework = "pytorch",
  output_dir,
  model_dir_path,
  text_dataset,
  p_mask = 0.15,
  val_size = 0.1,
  n_epoch = 1,
  batch_size = 12,
  chunk_size = 250,
  full_sequences_only = FALSE,
  min_seq_len = 50,
  learning_rate = 0.03,
  n_workers = 1,
  multi_process = FALSE,
  sustain_track = TRUE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15,
  trace = TRUE,
  keras_trace = 1,
  pytorch_trace = 1,
  pytorch_safetensors = TRUE,
  log_dir = NULL,
  log_write_interval = 2
)

Arguments

ml_framework

string Framework to use for training and inference.

  • ml_framework = "tensorflow": for 'tensorflow'.

  • ml_framework = "pytorch": for 'pytorch'.

output_dir

string Path to the directory where the final model should be saved. If the directory does not exist, it will be created.

model_dir_path

string Path to the directory where the original model is stored.

text_dataset

Object of class LargeDataSetForText.

p_mask

double Ratio that determines the number of words/tokens used for masking.

val_size

double Ratio that determines the number of token chunks used for validation.

n_epoch

int Number of epochs for training.

batch_size

int Size of batches.

chunk_size

int Size of each token chunk used for training.

full_sequences_only

bool TRUE for using only chunks with a sequence length equal to chunk_size.

min_seq_len

int Only relevant if full_sequences_only = FALSE. Determines the minimal sequence length for inclusion in the training process.

learning_rate

double Learning rate for the Adam optimizer.

n_workers

int Number of workers. Only relevant if ml_framework = "tensorflow".

multi_process

bool TRUE if multiple processes should be activated. Only relevant if ml_framework = "tensorflow".

sustain_track

bool If TRUE, energy consumption is tracked during training via the Python library codecarbon.

sustain_iso_code

string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes.

sustain_region

string Region within a country. Only available for USA and Canada. See the documentation of codecarbon for more information https://mlco2.github.io/codecarbon/parameters.html.

sustain_interval

integer Interval in seconds for measuring power usage.

trace

bool TRUE if information about the progress should be printed to the console.

keras_trace

int Only relevant if ml_framework = "tensorflow".

  • keras_trace = 0: does not print any information about the training process from keras on the console.

  • keras_trace = 1: prints a progress bar.

  • keras_trace = 2: prints one line of information for every epoch.

pytorch_trace

int

  • pytorch_trace = 0: does not print any information about the training process from pytorch on the console.

  • pytorch_trace = 1: prints a progress bar.

pytorch_safetensors

bool Only relevant for pytorch models.

  • TRUE: a 'pytorch' model is saved in safetensors format.

  • FALSE (or if 'safetensors' is not available): the model is saved in the standard pytorch format (.bin).

log_dir

string Path to the directory where the log files should be saved.

log_write_interval

int Time in seconds determining the interval in which the logger should try to update the log files. Only relevant if log_dir is not NULL.

Returns

This method does not return an object. Instead, the trained or fine-tuned model is saved to disk.


Method clone()

The objects of this class are cloneable with this method.

Usage

.AIFERobertaTransformer$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.