
Child R6 class for creation and training of Funnel transformers
Source: R/dotAIFEFunnelTransformer.R
dot-AIFEFunnelTransformer.Rd
This class has the following methods:
create: creates a new transformer based on Funnel.
train: trains and fine-tunes a Funnel model.
Note
The model uses a configuration with truncate_seq = TRUE to avoid implementation problems with tensorflow.
This model uses a WordPiece tokenizer like BERT and can be trained with whole word masking. The transformers library may display a warning, which can be ignored.
Create
New models can be created using the .AIFEFunnelTransformer$create method.
The model is created with separate_cls = TRUE, truncate_seq = TRUE, and pool_q_only = TRUE.
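A minimal sketch of this step is shown below. It assumes the class generator is accessible as documented in the Usage section further down and that text_corpus is an existing LargeDataSetForText object; the output path is a placeholder.
library(aifeducation)

# Sketch only: "models/funnel_base" and text_corpus are placeholders.
# If the generator is not exported, it may need to be accessed via aifeducation:::.
transformer <- .AIFEFunnelTransformer$new()
transformer$create(
  model_dir = "models/funnel_base",
  text_dataset = text_corpus,
  sustain_track = FALSE  # avoids the need to set sustain_iso_code in this sketch
)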
Train
To train the model, pass the directory of the model to the method .AIFEFunnelTransformer$train.
Pre-trained models which can be fine-tuned with this function are available at https://huggingface.co/.
Training of the model makes use of dynamic masking.
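A corresponding sketch for fine-tuning, again with placeholder paths and the placeholder LargeDataSetForText object text_corpus:
transformer <- .AIFEFunnelTransformer$new()
transformer$train(
  output_dir = "models/funnel_finetuned",  # placeholder directory for the tuned model
  model_dir_path = "models/funnel_base",   # placeholder: a created or downloaded Funnel model
  text_dataset = text_corpus,
  sustain_track = FALSE                    # skip codecarbon tracking in this sketch
)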
References
Dai, Z., Lai, G., Yang, Y. & Le, Q. V. (2020). Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing. doi:10.48550/arXiv.2006.03236
Hugging Face documentation
See also
Other R6 classes for transformers: .AIFEBaseTransformer, .AIFEBertTransformer, .AIFELongformerTransformer, .AIFEMpnetTransformer, .AIFERobertaTransformer
Super class
aifeducation::.AIFEBaseTransformer -> .AIFEFunnelTransformer
Methods
Inherited methods
aifeducation::.AIFEBaseTransformer$init_transformer()
aifeducation::.AIFEBaseTransformer$set_SFC_calculate_vocab()
aifeducation::.AIFEBaseTransformer$set_SFC_check_max_pos_emb()
aifeducation::.AIFEBaseTransformer$set_SFC_create_final_tokenizer()
aifeducation::.AIFEBaseTransformer$set_SFC_create_tokenizer_draft()
aifeducation::.AIFEBaseTransformer$set_SFC_create_transformer_model()
aifeducation::.AIFEBaseTransformer$set_SFC_save_tokenizer_draft()
aifeducation::.AIFEBaseTransformer$set_SFT_create_data_collator()
aifeducation::.AIFEBaseTransformer$set_SFT_cuda_empty_cache()
aifeducation::.AIFEBaseTransformer$set_SFT_load_existing_model()
aifeducation::.AIFEBaseTransformer$set_model_param()
aifeducation::.AIFEBaseTransformer$set_model_temp()
aifeducation::.AIFEBaseTransformer$set_required_SFC()
aifeducation::.AIFEBaseTransformer$set_title()
Method new()
Creates a new transformer based on Funnel and sets the title.
Usage
.AIFEFunnelTransformer$new(init_trace = TRUE)
Method create()
This method creates a transformer configuration based on the Funnel transformer base architecture and a vocabulary based on WordPiece using the python transformers and tokenizers libraries.
This method adds the following 'dependent' parameters to the base class's inherited params list:
vocab_do_lower_case
target_hidden_size
block_sizes
num_decoder_layers
pooling_type
activation_dropout
Usage
.AIFEFunnelTransformer$create(
  model_dir,
  text_dataset,
  vocab_size = 30522,
  vocab_do_lower_case = FALSE,
  max_position_embeddings = 512,
  hidden_size = 768,
  target_hidden_size = 64,
  block_sizes = c(4, 4, 4),
  num_attention_heads = 12,
  intermediate_size = 3072,
  num_decoder_layers = 2,
  pooling_type = "Mean",
  hidden_act = "GELU",
  hidden_dropout_prob = 0.1,
  attention_probs_dropout_prob = 0.1,
  activation_dropout = 0,
  sustain_track = TRUE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15,
  trace = TRUE,
  pytorch_safetensors = TRUE,
  log_dir = NULL,
  log_write_interval = 2
)
Arguments
model_dir
string Path to the directory where the model should be saved. Allowed values: any
text_dataset
LargeDataSetForText Object storing textual data.
vocab_size
int Size of the vocabulary. Allowed values: 1000 <= x <= 5e+05
vocab_do_lower_case
bool TRUE if all words/tokens should be lower case.
max_position_embeddings
int Number of maximum position embeddings. This parameter also determines the maximum length of a sequence which can be processed with the model. Allowed values: 10 <= x <= 4048
hidden_size
int Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding. Allowed values: 1 <= x <= 2048
target_hidden_size
int Number of neurons in the final layer. This parameter determines the dimensionality of the resulting text embedding. Allowed values: 1 <= x
block_sizes
vector of int determining the number of blocks and the size of each block.
num_attention_heads
int determining the number of attention heads within a self-attention layer. Only relevant if attention_type = 'multihead'. Allowed values: 0 <= x
intermediate_size
int determining the size of the projection layer within each transformer encoder. Allowed values: 1 <= x
num_decoder_layers
int Number of decoding layers. Allowed values: 1 <= x
pooling_type
string Type of pooling. "Mean" for pooling with mean, "Max" for pooling with maximum values. Allowed values: 'Mean', 'Max'
hidden_act
string Name of the activation function. Allowed values: 'gelu', 'relu', 'silu', 'gelu_new'
hidden_dropout_prob
double Ratio of dropout. Allowed values: 0 <= x <= 0.6
attention_probs_dropout_prob
double Ratio of dropout for attention probabilities. Allowed values: 0 <= x <= 0.6
activation_dropout
double Dropout probability between the layers of the feed-forward blocks. Allowed values: 0 <= x <= 0.6
sustain_track
bool If TRUE, energy consumption is tracked during training via the python library 'codecarbon'.
sustain_iso_code
string ISO code (Alpha-3 code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes. Allowed values: any
sustain_region
string Region within a country. Only available for the USA and Canada. See the documentation of codecarbon for more information (https://mlco2.github.io/codecarbon/parameters.html). Allowed values: any
sustain_interval
int Interval in seconds for measuring power usage. Allowed values: 1 <= x
trace
bool TRUE if information about the estimation phase should be printed to the console.
pytorch_safetensors
bool TRUE: a 'pytorch' model is saved in safetensors format. FALSE (or if 'safetensors' is not available): the model is saved in the standard pytorch format (.bin).
log_dir
string Path to the directory where the log files should be saved. If no logging is desired, set this argument to NULL. Allowed values: any
log_write_interval
int Time in seconds determining the interval in which the logger should try to update the log files. Only relevant if log_dir is not NULL. Allowed values: 1 <= x
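The sketch below illustrates how the 'dependent' parameters added by this method could be set when creating a smaller model; all values, paths, and the text_corpus object are illustrative assumptions, not recommendations.
transformer <- .AIFEFunnelTransformer$new()
transformer$create(
  model_dir = "models/funnel_small",  # placeholder
  text_dataset = text_corpus,         # placeholder LargeDataSetForText object
  vocab_do_lower_case = FALSE,
  target_hidden_size = 64,
  block_sizes = c(2, 2, 2),           # three blocks with two layers each
  num_decoder_layers = 2,
  pooling_type = "Mean",
  activation_dropout = 0,
  sustain_track = FALSE
)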
Method train()
This method can be used to train or fine-tune a transformer based on the Funnel Transformer architecture with the help of the python libraries transformers, datasets, and tokenizers.
Usage
.AIFEFunnelTransformer$train(
  output_dir,
  model_dir_path,
  text_dataset,
  p_mask = 0.15,
  whole_word = TRUE,
  val_size = 0.1,
  n_epoch = 1,
  batch_size = 12,
  chunk_size = 250,
  full_sequences_only = FALSE,
  min_seq_len = 50,
  learning_rate = 0.003,
  sustain_track = TRUE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15,
  trace = TRUE,
  pytorch_trace = 1,
  pytorch_safetensors = TRUE,
  log_dir = NULL,
  log_write_interval = 2
)
Arguments
output_dir
string Path to the directory where the model should be saved. Allowed values: any
model_dir_path
string Path to the directory where the original model is stored. Allowed values: any
text_dataset
LargeDataSetForText Object storing textual data.
p_mask
double Ratio that determines the number of words/tokens used for masking. Allowed values: 0 < x < 1
whole_word
bool TRUE: whole word masking is applied. FALSE: token masking is used.
val_size
double Value between 0 and 1, indicating the proportion of cases which should be used for the validation sample during the estimation of the model. The remaining cases are part of the training data. Allowed values: 0 < x < 1
n_epoch
int Number of training epochs. Allowed values: 1 <= x
batch_size
int Size of the batches for training. Allowed values: 1 <= x
chunk_size
int Maximum length of every sequence. Must be equal to or less than the global maximum size allowed by the model. Allowed values: 100 <= x
full_sequences_only
bool TRUE for using only chunks with a sequence length equal to chunk_size.
min_seq_len
int Only relevant if full_sequences_only = FALSE. Value determines the minimal sequence length included in the training process. Allowed values: 10 <= x
learning_rate
double Initial learning rate for the training. Allowed values: 0 < x <= 1
sustain_track
bool If TRUE, energy consumption is tracked during training via the python library 'codecarbon'.
sustain_iso_code
string ISO code (Alpha-3 code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes. Allowed values: any
sustain_region
string Region within a country. Only available for the USA and Canada. See the documentation of codecarbon for more information (https://mlco2.github.io/codecarbon/parameters.html). Allowed values: any
sustain_interval
int Interval in seconds for measuring power usage. Allowed values: 1 <= x
trace
bool TRUE if information about the estimation phase should be printed to the console.
pytorch_trace
int pytorch_trace = 0 does not print any information about the training process from pytorch to the console. Allowed values: 0 <= x <= 1
pytorch_safetensors
bool TRUE: a 'pytorch' model is saved in safetensors format. FALSE (or if 'safetensors' is not available): the model is saved in the standard pytorch format (.bin).
log_dir
string Path to the directory where the log files should be saved. If no logging is desired, set this argument to NULL. Allowed values: any
log_write_interval
int Time in seconds determining the interval in which the logger should try to update the log files. Only relevant if log_dir is not NULL. Allowed values: 1 <= x
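As a final illustration, a hedged sketch of a train() call that sets the masking and chunking options described above; paths and the text_corpus object are placeholders.
transformer <- .AIFEFunnelTransformer$new()
transformer$train(
  output_dir = "models/funnel_finetuned",  # placeholder
  model_dir_path = "models/funnel_base",   # placeholder
  text_dataset = text_corpus,              # placeholder LargeDataSetForText object
  p_mask = 0.15,
  whole_word = TRUE,                       # whole word masking with the WordPiece tokenizer
  chunk_size = 250,                        # must not exceed the model's maximum sequence length
  full_sequences_only = FALSE,
  min_seq_len = 50,
  learning_rate = 0.003,
  pytorch_trace = 0,                       # silence pytorch training output
  sustain_track = FALSE
)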