Function for creating a new transformer based on RoBERTa
Source: R/transformer_roberta.R
create_roberta_model.Rd
This function creates a transformer configuration based on the RoBERTa base architecture and a vocabulary based on the Byte-Pair Encoding (BPE) tokenizer, using the python libraries 'transformers' and 'tokenizers'.
Usage
create_roberta_model(
ml_framework = aifeducation_config$get_framework(),
model_dir,
vocab_raw_texts = NULL,
vocab_size = 30522,
add_prefix_space = FALSE,
trim_offsets = TRUE,
max_position_embeddings = 512,
hidden_size = 768,
num_hidden_layer = 12,
num_attention_heads = 12,
intermediate_size = 3072,
hidden_act = "gelu",
hidden_dropout_prob = 0.1,
attention_probs_dropout_prob = 0.1,
sustain_track = TRUE,
sustain_iso_code = NULL,
sustain_region = NULL,
sustain_interval = 15,
trace = TRUE,
pytorch_safetensors = TRUE
)
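For illustration, a call on a small hypothetical corpus could look like the sketch below. The text vector and the model directory are placeholders, sustainability tracking is switched off so that no ISO country code has to be supplied, and the remaining values simply restate the defaults of the RoBERTa base configuration.
# Hypothetical raw texts used to build the BPE vocabulary
example_texts <- c(
  "The quick brown fox jumps over the lazy dog.",
  "Transformers learn contextual representations of text."
)

create_roberta_model(
  ml_framework = "pytorch",
  model_dir = "models/my_roberta",  # placeholder output directory
  vocab_raw_texts = example_texts,
  vocab_size = 30522,
  max_position_embeddings = 512,
  hidden_size = 768,
  num_hidden_layer = 12,
  num_attention_heads = 12,
  intermediate_size = 3072,
  hidden_act = "gelu",
  hidden_dropout_prob = 0.1,
  attention_probs_dropout_prob = 0.1,
  sustain_track = FALSE,  # no energy tracking, so no ISO code needed
  trace = TRUE,
  pytorch_safetensors = TRUE
)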
Arguments
- ml_framework
string Framework to use for training and inference. ml_framework="tensorflow" for 'tensorflow' and ml_framework="pytorch" for 'pytorch'.
- model_dir
string Path to the directory where the model should be saved.
- vocab_raw_texts
vector containing the raw texts for creating the vocabulary.
- vocab_size
int Size of the vocabulary.
- add_prefix_space
bool TRUE if an additional space should be inserted before the leading words.
- trim_offsets
bool If TRUE, post-processing trims offsets to avoid including whitespaces.
- max_position_embeddings
int Number of maximal position embeddings. This parameter also determines the maximum length of a sequence which can be processed with the model.
- hidden_size
int Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding.
- num_hidden_layer
int Number of hidden layers.
- num_attention_heads
int Number of attention heads.
- intermediate_size
int Number of neurons in the intermediate layer of the attention mechanism.
- hidden_act
string Name of the activation function.
- hidden_dropout_prob
double Ratio of dropout.
- attention_probs_dropout_prob
double Ratio of dropout for attention probabilities.
- sustain_track
bool If TRUE, energy consumption is tracked during training via the python library codecarbon.
- sustain_iso_code
string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes.
- sustain_region
Region within a country. Only available for the USA and Canada. See the documentation of codecarbon for more information: https://mlco2.github.io/codecarbon/parameters.html.
- sustain_interval
integer Interval in seconds for measuring power usage.
- trace
bool TRUE if information about the progress should be printed to the console.
- pytorch_safetensors
bool If TRUE, a 'pytorch' model is saved in safetensors format. If FALSE or 'safetensors' is not available, it is saved in the standard pytorch format (.bin). Only relevant for pytorch models.
Value
This function does not return an object. Instead, the configuration and the vocabulary of the new model are saved to disk.
Note
To train the model, pass the directory of the model to the function train_tune_roberta_model.
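A minimal sketch of this follow-up step is given below. Apart from the function name and the idea of handing over the model directory, the argument names used here (output_dir, model_dir_path, raw_texts) are assumptions and should be checked against the documentation of train_tune_roberta_model.
# Sketch only: the argument names below are assumptions, not the verified signature
train_tune_roberta_model(
  ml_framework = "pytorch",
  output_dir = "models/my_roberta_trained",  # assumed name: directory for the trained model
  model_dir_path = "models/my_roberta",      # directory written by create_roberta_model above
  raw_texts = example_texts                  # assumed name: training corpus argument
)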
References
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. doi:10.48550/arXiv.1907.11692
Hugging Face Documentation: https://huggingface.co/docs/transformers/model_doc/roberta#transformers.RobertaConfig
See also
Other Transformer:
create_bert_model(), create_deberta_v2_model(), create_funnel_model(), create_longformer_model(), train_tune_bert_model(), train_tune_deberta_v2_model(), train_tune_funnel_model(), train_tune_longformer_model(), train_tune_roberta_model()