KGLM 2019
Traditional language models
- only capable of remembering facts seen at training time, and often have difficulty recalling them.
- unable to generate factually correct sentences, do not generalize to rare/unseen entities, and often omit rare tokens from the vocabulary (instead generating UNKNOWN tokens)
- existing models represent the distribution over the entire vocabulary directly, whether tokens are common words, references to real-world entities, or factual information like dates and numbers.
To address this, we introduce the knowledge graph language model (KGLM),
- a neural language model with mechanisms for selecting and copying facts from a knowledge graph that are relevant to the context.
- is conditioned on an external, structured knowledge source, which it uses to generate factual text.
- These mechanisms enable the model to render information it has never seen before, as well as generate out-of-vocabulary tokens
- KGLM maintains a dynamically growing local knowledge graph, a subset of the knowledge graph that contains entities that have already been mentioned in the text, and their related entities.
- When generating entity tokens, the model either decides to render a new entity that is absent from the local graph, thereby growing the local knowledge graph, or to render a fact from the local graph.
- When rendering, the model combines the standard vocabulary with tokens available in the knowledge graph, thus supporting numbers, dates, and other rare tokens.
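A minimal Python sketch of how such a local graph could grow; the entity names and helper function are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch (hypothetical names, not the paper's code) of the local
# knowledge graph: a growing subset of the full graph containing facts about
# entities that have already been mentioned in the text.
FULL_KG = {
    ("Barack Obama", "spouse", "Michelle Obama"),
    ("Barack Obama", "birth_date", "1961-08-04"),
    ("Michelle Obama", "birth_date", "1964-01-17"),
}

def grow_local_graph(local_kg, new_entity):
    """Add every fact from the full graph whose parent is the new entity."""
    for parent, relation, tail in FULL_KG:
        if parent == new_entity:
            local_kg.add((parent, relation, tail))
    return local_kg

local_kg = grow_local_graph(set(), "Barack Obama")
# After "Barack Obama" is mentioned, his spouse and birth date become facts the
# model can render from the local graph (e.g. copying the date token).
```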
Problem Setup and Notation
- A language model
- defines a probability distribution over each token within a sequence, conditioned on the sequence of tokens observed so far.
- We denote the random variable representing the next token as $x_t$ and the sequence of tokens observed so far as $x_{<t} = (x_1, \ldots, x_{t-1})$, i.e. language models compute $p(x_t \mid x_{<t})$.
- We use LSTMs as the recurrent module in this paper.
- A knowledge graph
- is a directed, labeled graph consisting of entities $\mathcal{E}$ as nodes, with edges defined over a set of relations $\mathcal{R}$, i.e. $\mathcal{KG} = \{(p, r, e) \mid p \in \mathcal{E}, r \in \mathcal{R}, e \in \mathcal{E}\}$, where $p$ is a parent entity with relation $r$ to another entity $e$.
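For the language-model half of this notation, a minimal PyTorch sketch of an LSTM computing $p(x_t \mid x_{<t})$ is shown below; the module names and sizes are illustrative, not taken from the paper.

```python
# Minimal PyTorch sketch of an LSTM language model; sizes are illustrative.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x_prev, state=None):
        # x_prev: (batch, seq_len) ids of the tokens observed so far.
        h, state = self.lstm(self.embed(x_prev), state)
        log_probs = self.proj(h).log_softmax(dim=-1)  # log p(x_t | x_{<t})
        return log_probs, state
```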
Generative KG Language Model
- To encourage the model to generate facts that have appeared in the context already, KGLM will maintain a local knowledge graph containing all facts involving entities that have appeared in the context.
- As the model decides to refer to entities that have not been referred to yet, it will grow the local knowledge graph with additional entities and facts to reflect the new entity.
- Formally, we will compute $p(x_t, \mathcal{E}_t \mid x_{<t}, \mathcal{E}_{<t})$, where $x_{<t}$ is the sequence of observed tokens, $\mathcal{E}_{<t}$ is the set of entities mentioned in $x_{<t}$, and $\mathcal{KG}_{<t}$ is the local knowledge graph determined by $\mathcal{E}_{<t}$. To generate $x_t$, the model first decides the type of the token, $t_t \in \{\text{new}, \text{related}, \emptyset\}$:
- If $t_t = \text{new}$ then choose the upcoming entity $e_t$ from the set of all entities $\mathcal{E}$.
- If $t_t = \text{related}$ then:
    - Choose a parent entity $p_t$ from $\mathcal{E}_{<t}$.
    - Choose a factual relation $r_t$ to render, $r_t \in \{(p, r, e) \in \mathcal{KG}_{<t} \mid p = p_t\}$.
    - Choose $e_t$ as one of the tail entities, $e_t \in \{e \mid (p_t, r_t, e) \in \mathcal{KG}_{<t}\}$.
- If $t_t = \emptyset$ then $e_t = \emptyset$.
- Generate $x_t$ conditioned on $e_t$, potentially copying one of $e_t$'s aliases.
- If $e_t \notin \mathcal{E}_{<t}$, then $\mathcal{E}_t = \mathcal{E}_{<t} \cup \{e_t\}$, else $\mathcal{E}_t = \mathcal{E}_{<t}$.
For the model to refer to an entity it has already mentioned, we introduce a Reflexive relation that self-relates, i.e. $p_t = e_t$ for $(e_t, \text{Reflexive}, e_t)$.
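Putting the steps together, the following Python sketch walks through one generative step, with uniform random choices standing in for KGLM's learned distributions; the function and argument names are assumptions, not the paper's code.

```python
# Illustrative sketch of one KGLM generative step; uniform random choices
# stand in for the learned distributions, and all names are assumed.
import random

def kglm_generate_step(local_kg, all_entities, mentioned, aliases):
    """local_kg: set of (parent, relation, entity) triples over mentioned
    entities; mentioned: entities referred to so far; aliases: entity -> list
    of surface forms (names, dates, ...)."""
    # 1. Decide the mention type t_t.
    t = random.choice(["new", "related", "none"])

    entity = None
    if t == "new":
        # 2a. Choose e_t from the set of all entities.
        entity = random.choice(sorted(all_entities))
    elif t == "related" and mentioned:
        # 2b. Choose a parent p_t, a relation r_t with that parent, and a tail e_t.
        parent = random.choice(sorted(mentioned))
        facts = [(r, e) for p, r, e in local_kg if p == parent]
        if facts:
            _, entity = random.choice(facts)

    # 3. Render x_t, potentially copying one of e_t's aliases.
    x_t = random.choice(aliases[entity]) if entity in aliases else "<vocab word>"

    # 4. Grow the set of mentioned entities, and hence the local graph.
    if entity is not None:
        mentioned.add(entity)
    return x_t, entity
```

In the actual model each of these choices is made by a learned distribution conditioned on the LSTM hidden state, and copying aliases is what lets it emit dates, numbers, and other rare tokens.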
Parameterizing the Distributions
EMAT 2022
- The main architectural innovation is to use an external knowledgebase, following RAG, and to combine it seamlessly with a memory mechanism to improve the model’s predictive performance. The model uses a question encoder and a document encoder, both transformers (based on DPR), to learn and look up passages of text from the knowledgebase, and then fuses this knowledge into a transformer encoder/decoder model such as T5.
- The model retrieves passages by performing a lookup in both the KB and the memory, and then reranks them together using the dot-product score between the question- and document-encoder vectors, as sketched below. (A significant benefit is that it naturally integrates both a short-term and a long-term KB retrieval mechanism with a relatively simple design, while allowing a powerful pre-trained LM and the retrieval system from RAG to be trained.)
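A rough sketch of the joint reranking step under assumed shapes (not EMAT's actual implementation): candidates pooled from the KB and the memory are scored by the dot product between the question- and document-encoder vectors.

```python
# Rough sketch of joint reranking by dot product; shapes and names are
# assumptions, not EMAT's actual code.
import numpy as np

def rerank(question_vec, candidate_vecs, top_k=5):
    """question_vec: (d,); candidate_vecs: (n, d), pooled from KB and memory."""
    scores = candidate_vecs @ question_vec       # dot-product relevance scores
    order = np.argsort(-scores)[:top_k]          # indices of best candidates
    return order, scores[order]

q = np.random.randn(128)                         # question-encoder vector
cands = np.random.randn(20, 128)                 # e.g. 10 KB + 10 memory passages
top_idx, top_scores = rerank(q, cands)
```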