Forcing science with brute force deep learning

Sonja Georgievska
Netherlands eScience Center
2 min read · Feb 26, 2020

This post is a follow-up to the post “Monopolizing AI” by Florian Huber. More concretely, here is the point I would like to draw attention to: recently, more and more scientific problems have been addressed by big-tech companies like Google by applying brute-force deep learning on an “ocean” of GPU/TPU computing nodes. For a marginal improvement in scientific results, we get an x% “improvement” in the global temperature. And it does not end there. The success of Google’s models pushes academic scientists to follow in its footsteps and apply energy-hungry deep learning methods to their own problems. The demand for computing infrastructure in academia quickly outstrips the supply, pressuring governments to invest even more in infrastructure.

And if you are reading this and your scientific problem is not related to deep learning, you are not on the safe side either. A simple “google search” using keywords from your field together with the term “deep learning” will probably lead you to another team of scientists on the other side of the planet (or perhaps even next door) and their (seemingly) impressive results using deep learning. You are pressured to use the deep learning phrase in your next grant proposal, demanding more and more computing power to stay competitive.

The impressive deep impact that a melting glacier makes on the sea surface. Photo by Curioso Photography on Unsplash

The current problem with deep learning is that, most of the time, it is used the way two-year-old children use toys meant for ten-year-olds: they press all the buttons until something impressive happens, never mind that the battery runs out in the process.

When we are children, we lack patience. When we grow up, “we lack time”.

Not all is bad, of course. Once trained, deep neural networks are very fast when used for prediction. But that assumes the model is actually used in practice, rather than for an incremental improvement of the state of the art, i.e. academic scores. Not to mention that, in the latter case, the published model usually leaves a lot to be desired in terms of the scientific knowledge it generates.

Apart from deep learning, Google is also famous for — well, its first product ever — the Google search engine. The latter must be incredibly fast, as it has to work in real time. The time efficiency of the algorithms used there is therefore of supreme importance, and the engineers developing it must swim effortlessly through big-O notation. Use Bubble sort instead of Quicksort and the competition will beat you.
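The gap between an O(n²) and an O(n log n) algorithm is easy to see in a small experiment. Below is a sketch in Python (not Google’s code, just an illustration; exact timings will vary by machine) comparing a hand-written Bubble sort with Python’s built-in `sorted`, which uses the O(n log n) Timsort algorithm:

```python
import random
import timeit

def bubble_sort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order elements."""
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swapped = True
        if not swapped:  # already sorted: stop early
            break
    return xs

# Time both on the same random input of modest size.
data = [random.random() for _ in range(2000)]
t_bubble = timeit.timeit(lambda: bubble_sort(data), number=3)
t_builtin = timeit.timeit(lambda: sorted(data), number=3)  # Timsort, O(n log n)
print(f"bubble: {t_bubble:.3f}s   built-in: {t_builtin:.3f}s")
```

Even at only 2,000 elements the built-in sort wins by orders of magnitude, and the gap widens quadratically as the input grows — which is exactly why resource constraints reward algorithmic thinking over brute force.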

Scarce resources — time, in this case — stimulate creative and efficient use of them.

Thus, when using deep learning in science, perhaps we should instead search for inspiration in this corner of the Google Universe.

Thanks to Johan Hidding, Florian Huber, Tom Bakker and Pablo Rodriguez-Sanchez for the useful suggestions.


