Slimmer AI
October 21, 2021
4 min read

Don’t Be an (AI) Fence Thrower

Simple tips to move from AI as an academic discipline to building applied AI

At a recent conference for Deloitte’s Netherlands AI practice, Dr. Stephan Tulkens and Thomas van Dongen, two of Slimmer AI’s machine learning engineers, spoke about moving from academic AI to applied AI, including both a failed and a successful example of applied AI in the real world.

Moving from academic AI to real-world applications

AI has only recently morphed from a purely academic discipline into something that is readily applied in the real world. Of course, companies like Google and Amazon are exceptions, but for most companies, applied AI is a new frontier.

When studying AI, practitioners learn what a good AI system looks like. But “good” here is defined in an academic sense, not a real-world one. Since the reward system in academia is still largely focused on publishing research, the following properties are valued in academic AI papers:

  • High scores on benchmarks
  • New models
  • New algorithms
  • New data/tasks

These properties play a much smaller role in applied AI, and transposing them leads to undesirable outcomes for both the ML engineer/researcher and the customer/end user:

From the researcher’s point of view, this results in job mismatches and unrealistic expectations: “I just want to build cool stuff!”

From the customer’s point of view, it results in disappointment: “Why did you build this? It doesn’t solve my problem.”

Two tips to avoid applied AI mismatches


Don’t be a fence thrower. Fence throwing (noun): the act of training an AI model and simply handing it to the customer. Example: “Alice trained a transformer and threw it over the fence.” Transposing the values of academic AI onto applied AI leads to fence throwing.

  • Novelty: doesn’t matter
  • Performance: doesn’t matter (as much as you think)

So what does matter?

  • Improving outcomes: does the model solve the problem?
  • Requirements: is it easy (enough) to use? Is it scalable and robust?


Keep it simple. The easiest way to keep it simple is to be more in love with the problem than with your solution. This goes for ML engineers, product owners, and entrepreneurs alike. Your customers and users want their problems solved; they won’t appreciate perfect or complex code, or a complicated user experience. As the problem-solver, you need to become obsessed with customer problems and with solving them.

A case study and a warning

COVID-19 attracted the efforts of many researchers and AI labs. Labs were already working on medical imaging, hospitals and universities had collaborations set up to tackle similar problems, and there was (and still is) high motivation to “fix” the pandemic. In other words: this was the perfect case to finally show that AI works.

Unfortunately, a 2021 paper published in Nature Machine Intelligence showed that, of 415 models reviewed that claimed to detect COVID-19, none was fit for clinical use. Some of them were nevertheless actively used in hospitals, and thus may have had a negative effect on COVID-19 treatment. The reasons these models failed are diverse:

  • The models were fit on small datasets from different hospitals; sometimes a model just learned about font sizes on the scans rather than anything medical.
  • The models were not validated in the hospital itself; they merely performed well on a test dataset.

In other words, these models were not overfit in the usual sense, nor did they contain bugs. The authors followed all the rules for publishing a successful AI paper. But the models didn’t work in the real world.
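The font-size failure mode can be made concrete with a toy simulation. Everything below is hypothetical (the feature, the numbers, and the one-line “model” are ours, not from the paper): a spurious feature that correlates with the label at the training hospital, but not elsewhere, yields perfect internal test accuracy and chance-level accuracy at a new site.

```python
import random

random.seed(0)

def make_patients(n, confounded):
    """Generate (font_size, label) pairs; label 1 = COVID-positive."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        if confounded:
            # At the training hospital, positives happen to be imaged
            # with a different annotation font size: a pure confound.
            font = 12 if label == 1 else 10
        else:
            # At a new hospital, font size carries no signal.
            font = random.choice([10, 12])
        data.append((font, label))
    return data

def predict(font):
    """A 'model' that latched onto the font-size shortcut."""
    return 1 if font >= 12 else 0

def accuracy(data):
    return sum(predict(f) == y for f, y in data) / len(data)

internal = make_patients(1000, confounded=True)
external = make_patients(1000, confounded=False)
print(accuracy(internal))   # 1.0 on the internal test set
print(accuracy(external))   # roughly 0.5 (chance) at a new hospital
```

The shortcut model looks flawless on data drawn from the same source it was fit on, which is exactly why a good benchmark score alone proves nothing about real-world usefulness.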

This is a textbook example of fence-throwing behaviour: an incorrect application of the principles governing academic AI to applied AI. It assumes that you don’t need to look at the real world to successfully model a task. In these cases, the authors were in love with the model, not with the problem.

A non-fence throwing applied AI success story

One of our customers detected that a small number of submitted research articles were being ‘faked’, in part by text generation models. To help them act quickly, we took an off-the-shelf transformer model that was trained on the OpenAI GPT-2 detector corpus and modified the code to work with longer documents.

Upgrading and refitting an existing model allowed us to produce working hypotheses within two days. Our solution is now being integrated into their workflow and will scan thousands of papers every month, alerting their subject matter experts to carefully check flagged papers and take appropriate action.
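The long-document adaptation could, in spirit, look like the sketch below: split the text into overlapping windows that fit the detector’s input limit and aggregate the per-window scores. The function names and the `score_chunk` stub are ours, not Slimmer AI’s actual code; a real detector (e.g., a GPT-2-output classifier with a roughly 512-token limit) would replace the stub.

```python
# Sketch: extend a fixed-window fake-text detector to long documents
# by chunking with overlapping windows and averaging per-chunk scores.

def chunk_tokens(tokens, window=512, stride=256):
    """Split a token list into overlapping windows of size `window`."""
    if len(tokens) <= window:
        return [tokens]
    return [tokens[start:start + window]
            for start in range(0, len(tokens) - stride, stride)]

def document_score(tokens, score_chunk, window=512, stride=256):
    """Score a document as the mean per-chunk 'fake' probability."""
    scores = [score_chunk(c) for c in chunk_tokens(tokens, window, stride)]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Stub scorer for illustration: flags chunks containing a marker token.
    fake_score = lambda chunk: 1.0 if "<generated>" in chunk else 0.0
    doc = ["word"] * 1000 + ["<generated>"] * 200
    print(document_score(doc, fake_score))  # 0.5: half the windows look fake
```

Averaging is the simplest aggregation; taking the maximum chunk score instead would flag documents where only a short passage was machine-generated.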


When moving from academic AI to applied AI, it’s important to remember the tips from Stephan and Thomas: don’t be a fence thrower, keep it simple, and fall in love with customer problems.

At Slimmer AI, we are building applied AI B2B ventures to improve people’s work. We believe that AI can and should be a ‘force for good’: one that not only empowers people to do their best work but also creates greater job satisfaction and achieves far better business outcomes. Learn more on our website.

Follow us on LinkedIn and Twitter for more stories like this.