Abstract:
Temporal knowledge graph reasoning is a technical foundation for improving the efficiency of intelligent decision-making about future situations. Traditional reasoning models suffer from problems such as large parameter scales and high computing-hardware requirements, making it difficult to meet the real-time reasoning and decision-making needs of low-performance, low-power distributed devices; moreover, traditional model compression methods ignore temporal characteristics. A distillation method for temporal knowledge graph reasoning models is therefore proposed. A distillation framework is constructed based on large language models: massive public knowledge and task-specific temporal knowledge are integrated to assist the training of a lightweight model. Experiments on open datasets show that the proposed method outperforms similar international methods.
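The abstract does not specify the distillation objective; a minimal sketch of a standard temperature-scaled knowledge-distillation loss, as commonly used to transfer a large teacher's output distribution to a lightweight student (all function names here are hypothetical illustrations, not the paper's implementation), might look like:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions,
    # scaled by T^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Identical logits yield (near-)zero loss; diverging logits a positive loss.
print(distillation_loss([1.0, 2.0], [1.0, 2.0]))
print(distillation_loss([3.0, 0.0], [0.0, 3.0]))
```

In practice this soft-label term is typically combined with a hard-label task loss (e.g. cross-entropy on ground-truth entities), weighted by a mixing coefficient.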