Transformer: Linking Atom Mapping and Neural Machine Translation

03 November 2020, Version 1
This content is a preprint and has not undergone peer review at the time of posting.

Abstract


Atom mapping reveals the correspondence between reactant and product atoms in chemical reactions, which is important for drug design, exploration of underlying chemical mechanisms, reaction classification, and more. Here, we present a new method that links atom mapping and neural machine translation using the transformer model. In contrast to previous algorithms, our method runs reaction prediction and captures the correspondence between atoms in parallel. We use a set of approximately 360K reactions without atom mapping information to acquire general chemical knowledge, then transfer it to the atom mapping task on another dataset containing 50K atom-mapped reactions. Under manual evaluation, the top-1 accuracy of the transformer model in atom mapping reaches 91.4%. We hope our work provides an important step toward solving the challenging problem of atom mapping from a linguistic perspective.
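The abstract's framing of atom mapping as a translation problem can be illustrated with a small sketch: SMILES strings are tokenized as in neural machine translation, and an atom mapping is then read off a cross-attention matrix by taking, for each product atom token, the reactant token it attends to most. The tokenizer regex is the one commonly used for SMILES in the NMT literature; the attention weights and variable names below are illustrative toy values, not taken from the paper's code or data.

```python
# Hypothetical sketch: tokenize SMILES, then derive an atom mapping
# from a toy cross-attention matrix (illustrative values only).
import re

# Common SMILES tokenization pattern from the NMT-for-chemistry literature.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize(smiles: str) -> list[str]:
    """Split a SMILES string into tokens (atoms, bonds, ring digits)."""
    return SMILES_TOKEN.findall(smiles)

reactant = tokenize("CCO")   # ['C', 'C', 'O']
product = tokenize("CC=O")   # ['C', 'C', '=', 'O']

# Toy cross-attention weights: product tokens (rows) over reactant tokens
# (columns). In the real model these come from the transformer decoder.
attention = [
    [0.8, 0.1, 0.1],  # product C  attends mostly to reactant index 0
    [0.1, 0.8, 0.1],  # product C  attends mostly to reactant index 1
    [0.3, 0.4, 0.3],  # '=' is a bond token, not an atom; skipped below
    [0.1, 0.1, 0.8],  # product O  attends mostly to reactant index 2
]

# Only atom tokens participate in the mapping.
ATOM = re.compile(r"^(\[[^\]]+\]|Br?|Cl?|[NOSPFIbcnosp])$")

mapping = {
    i: max(range(len(reactant)), key=lambda j: attention[i][j])
    for i, tok in enumerate(product)
    if ATOM.match(tok)
}
print(mapping)  # {0: 0, 1: 1, 3: 2}
```

The argmax step is the simplest possible readout; the paper's manually evaluated 91.4% top-1 accuracy refers to its full transformer pipeline, not to this toy decoding.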

Keywords

Artificial Intelligence
Machine Learning
Transformer
Reaction Prediction
Atom Mapping

Supplementary materials

Supporting information
