
Rule-Enhanced Embedding Learning

The basic idea of rule-enhanced representations is to learn entity and relation embeddings not only from the original triplets observed in KGs but also from the triplets or ground rules inferred by pre-defined rules, which is also called symbolic-driven neural reasoning.

Type of Rules

The rules that are commonly used mostly follow one of the following forms:

(x, r, x)

(x, r, y) \( \rightarrow \) (y, r, x)

(x, r, y) \( \wedge \) (y, r, z) \( \rightarrow \) (x, r, z)

(x, r1, y) \( \rightarrow \) (y, r2, x)

(x, r1, y) \( \rightarrow \) (x, r2, y)

(x, r1, y) \( \wedge \) (y, r2, z) \( \rightarrow \) (x, r3, z)

where x, y, z are entities and r, r1, r2, r3 are relations.
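
To make the grounding step concrete, here is a minimal sketch in Python of how the composition rule (x, r1, y) \( \wedge \) (y, r2, z) \( \rightarrow \) (x, r3, z) can be applied to a set of observed triplets to derive new ones. The triplets, the relation names, and the `ground_composition` helper are made up for illustration.

```python
# Minimal sketch: grounding the composition rule
# (x, r1, y) AND (y, r2, z) -> (x, r3, z) against observed triplets.
# The triplets and relation names below are made up for illustration.

def ground_composition(triplets, r1, r2, r3):
    """Return the triplets derived by the rule that are not yet observed."""
    # Index the triplets of relation r2 by their head entity (join key y).
    by_head = {}
    for (h, r, t) in triplets:
        if r == r2:
            by_head.setdefault(h, set()).add(t)

    derived = set()
    for (x, r, y) in triplets:
        if r != r1:
            continue
        for z in by_head.get(y, ()):
            candidate = (x, r3, z)
            if candidate not in triplets:
                derived.add(candidate)
    return derived


observed = {
    ("Alice", "born_in", "Paris"),
    ("Paris", "city_of", "France"),
}
print(ground_composition(observed, "born_in", "city_of", "nationality"))
# {('Alice', 'nationality', 'France')}
```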

Methodologies

There are two main methodologies. The first infers the rules once at the beginning and keeps them fixed during the learning process; the rules then influence embedding learning, but not the other way around.
The second also infers new rules based on the updated embeddings at each iteration, so new rules are inferred and new triplets are derived iteratively.

Classic Methods

For example, KALE [1] deals with two types of rules:

(x, r1, y) \( \rightarrow \) (x, r2, y)

(x, r1, y) \( \wedge \) (y, r2, z) \( \rightarrow \) (x, r3, z)

KALE finds all ground rules of the above two types, assigns a score to each ground rule indicating how likely it is to be satisfied, and finally learns the entity and relation embeddings on the training set consisting of the original triplets and the ground rules.
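
As a rough sketch of this scoring step, the snippet below follows the TransE-style triplet truth value and product t-norm composition described in the KALE paper, but omits training, the norm constraints, and negative sampling; the embeddings are random and purely illustrative.

```python
import numpy as np

# Rough sketch of KALE-style scoring: a TransE-style truth value for each
# triplet, combined with a product t-norm to score ground rules. Training,
# norm constraints, and negative sampling are omitted.

def triple_truth(e_h, r, e_t):
    """Truth value of a triplet (h, r, t); lies in [0, 1] when the embedding
    norms are bounded, and is larger for more plausible triplets."""
    d = len(r)
    return 1.0 - np.abs(e_h + r - e_t).sum() / (3.0 * np.sqrt(d))

def implication_truth(i_body, i_head):
    """Truth value of a ground rule f_body -> f_head."""
    return i_body * i_head - i_body + 1.0

def composition_truth(i_b1, i_b2, i_head):
    """Truth value of a ground rule f_b1 AND f_b2 -> f_head."""
    return i_b1 * i_b2 * i_head - i_b1 * i_b2 + 1.0

# Toy example with random (purely illustrative) embeddings.
rng = np.random.default_rng(0)
d = 50
e_x, e_y, e_z = (rng.normal(size=d) / np.sqrt(d) for _ in range(3))
r1, r2, r3 = (rng.normal(size=d) / np.sqrt(d) for _ in range(3))

score = composition_truth(
    triple_truth(e_x, r1, e_y),
    triple_truth(e_y, r2, e_z),
    triple_truth(e_x, r3, e_z),
)
print(score)  # how well this ground rule is satisfied under the embeddings
```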

Based on KALE, RUGE [2] turns the one-round injection of rules into an iterative procedure. Instead of directly treating each ground rule as a positive instance, as KALE does, RUGE injects the triplets derived from the rules as unlabeled triplets when updating the entity/relation embeddings.
Since the unlabeled triplets are not necessarily true, RUGE predicts a probability for each unlabeled triplet based on the current embeddings. The embeddings are then updated based on both the labeled and unlabeled triplets.
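
A much-simplified sketch of this idea is shown below. It uses a placeholder TransE-style scorer in place of the ComplEx model RUGE builds on, and it drops the rule-confidence constraints RUGE imposes when predicting soft labels, so it only illustrates the soft-labeling and joint-update pattern.

```python
import numpy as np

# Highly simplified sketch of the RUGE idea (not the exact RUGE objective):
# unlabeled triplets derived from rules get soft labels from the current
# model, and the embeddings are then updated on both hard- and soft-labeled
# triplets. A TransE-style scorer stands in for RUGE's ComplEx model, and
# the rule-confidence constraints on the soft labels are omitted.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(emb, h, r, t):
    """Placeholder plausibility score of a triplet (h, r, t)."""
    return -np.abs(emb["ent"][h] + emb["rel"][r] - emb["ent"][t]).sum()

def soft_label(emb, triple):
    """Soft-label prediction: plausibility under the current embeddings."""
    return float(sigmoid(score(emb, *triple)))

def loss(emb, labeled, unlabeled):
    """Cross-entropy over hard labels plus soft labels on derived triplets."""
    total = 0.0
    for (h, r, t), y in labeled:                  # y in {0, 1}
        p = sigmoid(score(emb, h, r, t))
        total += -(y * np.log(p) + (1 - y) * np.log(1 - p))
    for triple in unlabeled:
        s = soft_label(emb, triple)               # soft label in [0, 1]
        p = sigmoid(score(emb, *triple))
        total += -(s * np.log(p) + (1 - s) * np.log(1 - p))
    return total

# Toy usage with hypothetical entities and relations.
rng = np.random.default_rng(0)
emb = {"ent": {e: rng.normal(size=20) for e in ("x", "y")},
       "rel": {r: rng.normal(size=20) for r in ("r1", "r2")}}
print(loss(emb, labeled=[(("x", "r1", "y"), 1)], unlabeled=[("x", "r2", "y")]))
```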

IterE [3] also infers new rules based on the updated embeddings at each iteration. Specifically, it infers new rules and derives new triplets from those rules based on the current entity and relation embeddings, and then updates the embeddings on the extended triplet set. The two processes are executed alternately, so the rules improve embedding learning, and the embeddings in turn benefit the inference of rules.
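
To illustrate what inferring rules from embeddings might look like, the sketch below assumes relations are embedded as matrices and treats a composition rule as inferred whenever the product of two relation matrices is close to a third; IterE itself evaluates OWL2 axiom patterns under a specific bilinear model, so this is only a schematic of the alternation, not a reimplementation.

```python
import numpy as np

# Hedged sketch of an IterE-style alternation. Simplification: relations are
# represented as matrices, and a composition rule r1 o r2 -> r3 is "inferred"
# whenever the product of two relation matrices is close to a third one.
# IterE itself works with OWL2 axiom patterns and a specific bilinear model.

def infer_composition_rules(rel_mats, threshold=0.9):
    """Return candidate rules (r1, r2, r3) with M_r1 @ M_r2 close to M_r3."""
    rules = []
    for r1, m1 in rel_mats.items():
        for r2, m2 in rel_mats.items():
            prod = m1 @ m2
            for r3, m3 in rel_mats.items():
                if r3 in (r1, r2):
                    continue
                cos = (prod * m3).sum() / (np.linalg.norm(prod) * np.linalg.norm(m3))
                if cos > threshold:
                    rules.append((r1, r2, r3))
    return rules

# Outline of the alternation (train_embeddings is a hypothetical trainer and
# ground_composition is the grounding helper sketched earlier):
#
#   triplets = set(observed_triplets)
#   for _ in range(num_iterations):
#       ent_vecs, rel_mats = train_embeddings(triplets)
#       for (r1, r2, r3) in infer_composition_rules(rel_mats):
#           triplets |= ground_composition(triplets, r1, r2, r3)

# Toy check: build r3 so that the rule r1 o r2 -> r3 holds exactly.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(infer_composition_rules({"r1": A, "r2": B, "r3": A @ B}))
# expected: [('r1', 'r2', 'r3')]
```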

[1] Jointly Embedding Knowledge Graphs and Logical Rules.

[2] Knowledge Graph Embedding with Iterative Guidance from Soft Rules.

[3] Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning.
