Can We Mitigate Transformers' Flaws?
How to overcome transformers' difficulty with long-document inputs
Transformers are a game-changer in NLP. But they do have a flaw: handling long-document inputs, such as those found in legal or medical texts. And while there are a few mitigation tactics, such as truncating or chunking, none of them is perfect.
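To make the two tactics concrete, here is a minimal sketch over plain token lists. Real pipelines would use a tokenizer library, and the 512-token limit below is an assumption based on the typical BERT-style maximum, not a detail from the article.

```python
MAX_LEN = 512  # assumed model input limit (BERT-style); not from the article

def truncate(tokens, max_len=MAX_LEN):
    """Keep only the first max_len tokens; everything after is discarded."""
    return tokens[:max_len]

def chunk(tokens, max_len=MAX_LEN, stride=None):
    """Split tokens into consecutive windows of at most max_len tokens.

    Passing a stride smaller than max_len makes the windows overlap,
    so context is not lost at chunk boundaries.
    """
    step = stride or max_len
    return [tokens[i:i + max_len] for i in range(0, len(tokens), step)]

doc = list(range(1200))              # stand-in for a 1200-token document
print(len(truncate(doc)))            # 512 — the remaining 688 tokens are lost
print([len(c) for c in chunk(doc)])  # [512, 512, 176] — nothing lost, but
                                     # the model never sees the whole document at once
```

Truncation loses information outright; chunking keeps it all but forces the model to score pieces independently, which is exactly why neither tactic is fully satisfying.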
In his Towards Data Science article, "Overcoming Input Length Constraints of Transformers", Slimmer AI machine learning engineer Thomas van Dongen takes a closer look at a possible solution to the problem: summarisation.
He conducts an experiment to evaluate how a Transformer-based model performs when trained on extractive summaries of documents rather than the full texts.
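For intuition, a toy extractive summariser might score each sentence by the corpus frequency of its words and keep the top-k sentences. This is a simple frequency heuristic for illustration only, not the method evaluated in the article.

```python
from collections import Counter

def extractive_summary(sentences, k=2):
    """Keep the k sentences whose words are most frequent overall,
    preserving their original order."""
    words = [w.lower() for s in sentences for w in s.split()]
    freq = Counter(words)
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(freq[w.lower()] for w in sentences[i].split()),
        reverse=True,
    )
    keep = sorted(ranked[:k])  # restore document order
    return [sentences[i] for i in keep]

doc = [
    "Transformers process text with self-attention.",
    "Self-attention scales poorly with input length.",
    "The weather was pleasant that day.",
    "Long legal documents often exceed the input limit.",
]
summary = extractive_summary(doc, k=2)
```

The idea is that a summary short enough to fit the model's input limit can stand in for the full document at training time.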
To find out whether summarisation can overcome this limitation of transformers, read his full article.
Follow us on LinkedIn and Twitter for more stories like this.