PyTorch BERT model.
actableai.third_parties.spanABSA.bert.modeling.BERTAttention(config)¶Bases: torch.nn.modules.module.Module
forward(input_tensor, attention_mask)¶Defines the computation performed at every call. Should be overridden by all subclasses.
Note (inherited from torch.nn.Module, and applying to every forward() in this section): although the forward pass must be defined inside this method, call the Module instance itself rather than forward() directly, since the instance call runs any registered hooks while a direct forward() call silently ignores them.
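A minimal sketch of why the instance call matters; the linear layer and hook below are illustrative, not part of this package:
```python
import torch
from torch import nn

layer = nn.Linear(4, 2)
layer.register_forward_hook(lambda module, inputs, output: print("hook ran"))

x = torch.randn(1, 4)
y1 = layer(x)          # __call__ runs forward() AND fires the registered hook
y2 = layer.forward(x)  # same output, but the hook is silently skipped
```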
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BERTEmbeddings(config)¶Bases: torch.nn.modules.module.Module
forward(input_ids, token_type_ids=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BERTEncoder(config)¶Bases: torch.nn.modules.module.Module
forward(hidden_states, attention_mask)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BERTIntermediate(config)¶Bases: torch.nn.modules.module.Module
forward(hidden_states)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BERTLayer(config)¶Bases: torch.nn.modules.module.Module
forward(hidden_states, attention_mask)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BERTLayerNorm(config, variance_epsilon=1e-12)¶Bases: torch.nn.modules.module.Module
forward(x)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BERTOutput(config)¶Bases: torch.nn.modules.module.Module
forward(hidden_states, input_tensor)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BERTPooler(config)¶Bases: torch.nn.modules.module.Module
forward(hidden_states)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BERTSelfAttention(config)¶Bases: torch.nn.modules.module.Module
forward(hidden_states, attention_mask)¶
training: bool¶
transpose_for_scores(x)¶
actableai.third_parties.spanABSA.bert.modeling.BERTSelfOutput(config)¶Bases: torch.nn.modules.module.Module
forward(hidden_states, input_tensor)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BertConfig(vocab_size, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=16, initializer_range=0.02)¶Bases: object
Configuration class to store the configuration of a BertModel.
from_dict(json_object)¶Constructs a BertConfig from a Python dictionary of parameters.
from_json_file(json_file)¶Constructs a BertConfig from a json file of parameters.
to_dict()¶Serializes this instance to a Python dictionary.
to_json_string()¶Serializes this instance to a JSON string.
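A hedged sketch of the round-trip these four methods provide; 30522 is the standard BERT vocabulary size, used here only for illustration:
```python
config = BertConfig(vocab_size=30522)  # remaining hyper-parameters keep their defaults
as_dict = config.to_dict()             # plain dict of all hyper-parameters
clone = BertConfig.from_dict(as_dict)  # reconstructs an equivalent config
print(config.to_json_string())         # JSON form, e.g. for writing to disk
```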
actableai.third_parties.spanABSA.bert.modeling.BertForQuestionAnswering(config)¶Bases: torch.nn.modules.module.Module
BERT model for Question Answering (span extraction). This module is composed of the BERT model with a linear layer on top of the sequence output that computes start_logits and end_logits.
Example usage:
```python
# Already been converted into WordPiece token ids
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 2, 0]])

model = BertForQuestionAnswering(config)
start_logits, end_logits = model(input_ids, token_type_ids, input_mask)
```
forward(input_ids, token_type_ids, attention_mask, start_positions=None, end_positions=None)¶
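When start_positions and end_positions are supplied, the original pytorch-pretrained-bert model that this class mirrors returns the training loss instead of the logits; a hedged sketch assuming the same behaviour here, with illustrative gold spans:
```python
start_positions = torch.LongTensor([0, 1])  # illustrative gold start indices
end_positions = torch.LongTensor([2, 1])    # illustrative gold end indices

loss = model(input_ids, token_type_ids, input_mask,
             start_positions=start_positions, end_positions=end_positions)
loss.backward()  # ordinary PyTorch backprop on the returned scalar
```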
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BertForSequenceClassification(config, num_labels)¶Bases: torch.nn.modules.module.Module
BERT model for classification. This module is composed of the BERT model with a linear layer on top of the pooled output.
Example usage:
```python
# Already been converted into WordPiece token ids
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 2, 0]])

num_labels = 2
model = BertForSequenceClassification(config, num_labels)
logits = model(input_ids, token_type_ids, input_mask)
```
forward(input_ids, token_type_ids, attention_mask, labels=None)¶
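As with the question-answering head, passing labels presumably switches forward() to returning a loss, matching the original implementation; an illustrative sketch:
```python
labels = torch.LongTensor([0, 1])  # illustrative gold classes for the batch above
loss = model(input_ids, token_type_ids, input_mask, labels=labels)
loss.backward()
```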
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.BertModel(config: actableai.third_parties.spanABSA.bert.modeling.BertConfig)¶Bases: torch.nn.modules.module.Module
BERT model (“Bidirectional Encoder Representations from Transformers”).
Example usage:
```python
# Already been converted into WordPiece token ids
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 2, 0]])

model = modeling.BertModel(config=config)
all_encoder_layers, pooled_output = model(input_ids, token_type_ids, input_mask)
```
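In the pytorch-pretrained-bert code this model derives from, the two return values are a list of per-layer hidden states and the pooled [CLS] vector; assuming the same here:
```python
final_hidden = all_encoder_layers[-1]  # [batch, seq_len, hidden_size], last encoder layer
sentence_vec = pooled_output           # [batch, hidden_size], tanh-pooled [CLS] token
```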
forward(input_ids, token_type_ids=None, attention_mask=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.modeling.gelu(x)¶Implementation of the gelu activation function. For information: OpenAI GPT’s gelu is slightly different (and gives slightly different results): 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
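For comparison, a sketch of both variants; the erf form is what the original pytorch-pretrained-bert code uses and is assumed to be the one implemented here:
```python
import math
import torch

def gelu(x):
    # erf-based GELU, as in the original pytorch-pretrained-bert (assumed here)
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu_gpt(x):
    # OpenAI GPT's tanh approximation, quoted in the docstring above
    return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi)
                                     * (x + 0.044715 * torch.pow(x, 3))))
```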
PyTorch optimization for BERT model.
actableai.third_parties.spanABSA.bert.optimization.BERTAdam(params, lr, warmup=-1, t_total=-1, schedule='warmup_linear', b1=0.9, b2=0.999, e=1e-06, weight_decay_rate=0.01, max_grad_norm=1.0)¶Bases: torch.optim.optimizer.Optimizer
Implements the BERT version of the Adam algorithm with the weight-decay fix (and no bias correction). Params:
lr: learning rate
warmup: portion of t_total for the warmup, -1 means no warmup. Default: -1
t_total: total number of training steps for the learning rate schedule, -1 means constant learning rate. Default: -1
schedule: schedule to use for the warmup (see above). Default: 'warmup_linear'
b1: Adam's b1. Default: 0.9
b2: Adam's b2. Default: 0.999
e: Adam's epsilon. Default: 1e-6
weight_decay_rate: weight decay. Default: 0.01
max_grad_norm: maximum norm for the gradients (-1 means no clipping). Default: 1.0
get_lr()¶
step(closure=None)¶Performs a single optimization step.
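A hedged training-loop sketch using the parameters documented above; model, batches, and num_train_steps are placeholders. The warmup_linear schedule named here is one of the warmup functions listed just below:
```python
optimizer = BERTAdam(model.parameters(), lr=3e-5,
                     warmup=0.1,               # warm up over the first 10% of steps
                     t_total=num_train_steps,  # enables the warmup schedule
                     schedule='warmup_linear')

for batch in batches:
    loss = model(**batch)
    loss.backward()       # gradient clipping is governed by max_grad_norm above
    optimizer.step()
    optimizer.zero_grad()
```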
actableai.third_parties.spanABSA.bert.optimization.warmup_constant(x, warmup=0.002)¶
actableai.third_parties.spanABSA.bert.optimization.warmup_cosine(x, warmup=0.002)¶
actableai.third_parties.spanABSA.bert.optimization.warmup_linear(x, warmup=0.002)¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.BertForBIOAspectClassification(config, use_crf=False)¶Bases: torch.nn.modules.module.Module
forward(input_ids, token_type_ids, attention_mask, polarity_positions=None, device=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.BertForBIOAspectExtraction(config, use_crf=False)¶Bases: torch.nn.modules.module.Module
forward(input_ids, token_type_ids, attention_mask, bio_labels=None, device=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.BertForCollapsedBIOAspectExtractionAndClassification(config, use_crf=False)¶Bases: torch.nn.modules.module.Module
forward(input_ids, token_type_ids, attention_mask, bio_labels=None, device=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.BertForCollapsedSpanAspectExtractionAndClassification(config)¶Bases: torch.nn.modules.module.Module
forward(input_ids, token_type_ids, attention_mask, neu_start_positions=None, neu_end_positions=None, pos_start_positions=None, pos_end_positions=None, neg_start_positions=None, neg_end_positions=None, neu_mask=None, pos_mask=None, neg_mask=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.BertForJointBIOExtractAndClassification(config, use_crf=False)¶Bases: torch.nn.modules.module.Module
forward(mode, attention_mask, input_ids=None, token_type_ids=None, bio_labels=None, polarity_positions=None, sequence_input=None, device=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.BertForJointSpanExtractAndClassification(config)¶Bases: torch.nn.modules.module.Module
forward(mode, attention_mask, input_ids=None, token_type_ids=None, start_positions=None, end_positions=None, span_starts=None, span_ends=None, polarity_labels=None, label_masks=None, sequence_input=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.BertForSpanAspectClassification(config)¶Bases: torch.nn.modules.module.Module
forward(mode, attention_mask, input_ids=None, token_type_ids=None, span_starts=None, span_ends=None, labels=None, label_masks=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.BertForSpanAspectExtraction(config)¶Bases: torch.nn.modules.module.Module
BERT model for span extraction (here used for aspect-term extraction). This module is composed of the BERT model with a linear layer on top of the sequence output that computes start_logits and end_logits.
Example usage:
```python
# Already been converted into WordPiece token ids
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 2, 0]])

model = BertForSpanAspectExtraction(config)
start_logits, end_logits = model(input_ids, token_type_ids, input_mask)
```
forward(input_ids, token_type_ids, attention_mask, start_positions=None, end_positions=None)¶
training: bool¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.convert_crf_output(outputs, sequence_length, device)¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.distant_cross_entropy(logits, positions, mask=None)¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.distant_loss(start_logits, end_logits, start_positions=None, end_positions=None, mask=None)¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.flatten(x)¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.flatten_emb_by_sentence(emb, emb_mask)¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.get_self_att_representation(input, input_score, input_mask)¶Returns a tensor of shape [N, D].
actableai.third_parties.spanABSA.bert.sentiment_modeling.get_span_representation(span_starts, span_ends, input, input_mask)¶Returns tensors of shape [N*M, JR, D] and [N*M, JR].
actableai.third_parties.spanABSA.bert.sentiment_modeling.pad_sequence(sequence, length)¶
actableai.third_parties.spanABSA.bert.sentiment_modeling.reconstruct(x, ref)¶
Tokenization classes.
actableai.third_parties.spanABSA.bert.tokenization.BasicTokenizer(do_lower_case=True)¶Bases: object
Runs basic tokenization (punctuation splitting, lower casing, etc.).
tokenize(text)¶Tokenizes a piece of text.
actableai.third_parties.spanABSA.bert.tokenization.FullTokenizer(vocab_file, do_lower_case=True)¶Bases: object
Runs end-to-end tokenization.
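A hedged end-to-end sketch using the two methods listed just below; vocab.txt is a placeholder path to a BERT vocabulary file:
```python
tokenizer = FullTokenizer(vocab_file="vocab.txt", do_lower_case=True)
tokens = tokenizer.tokenize("The staff was friendly.")  # basic + WordPiece passes
ids = tokenizer.convert_tokens_to_ids(tokens)           # ready to feed to BertModel
```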
convert_tokens_to_ids(tokens)¶
tokenize(text)¶
actableai.third_parties.spanABSA.bert.tokenization.WordpieceTokenizer(vocab, unk_token='[UNK]', max_input_chars_per_word=100)¶Bases: object
Runs WordPiece tokenization.
tokenize(text)¶Tokenizes a piece of text into its word pieces.
This uses a greedy longest-match-first algorithm to perform tokenization using the given vocabulary.
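An illustrative sketch of the longest-match-first behaviour, assuming the standard BERT WordPiece semantics; the toy vocabulary is not from this package:
```python
vocab = {"un": 0, "##aff": 1, "##able": 2, "[UNK]": 3}
tokenizer = WordpieceTokenizer(vocab=vocab)

print(tokenizer.tokenize("unaffable"))  # -> ['un', '##aff', '##able']
print(tokenizer.tokenize("xyzzy"))      # no vocabulary match -> ['[UNK]']
```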
actableai.third_parties.spanABSA.bert.tokenization.convert_to_unicode(text)¶Converts text to Unicode (if it’s not already), assuming utf-8 input.
actableai.third_parties.spanABSA.bert.tokenization.convert_tokens_to_ids(vocab, tokens)¶Converts a sequence of tokens into ids using the vocab.
actableai.third_parties.spanABSA.bert.tokenization.load_vocab(vocab_file)¶Loads a vocabulary file into a dictionary.
actableai.third_parties.spanABSA.bert.tokenization.printable_text(text)¶Returns text encoded in a way suitable for print or tf.logging.
actableai.third_parties.spanABSA.bert.tokenization.whitespace_tokenize(text)¶Runs basic whitespace cleaning and splitting on a piece of text.