Deep Learning Research in 2019: Part 2

ODSC - Open Data Science
4 min readJun 18, 2019

The deep learning revolution has continued to expand in 2019, affecting a wide range of fields from neuroscience to social media and more. In practical as well as theoretical applications, deep learning is growing more advanced and more influential. Below are some of the most interesting research papers published on the topic so far this year that chart recent developments in deep learning and what to expect in the near future.

Deep Learning for Neuroscience

Neural networks took their basic design from the human brain, using numerical “weights” to mimic the synaptic process that occurs during thought. As these systems grow more advanced, neuroscientists can treat them as analogs of the human brain, using neural networks to study aspects of cognition that have otherwise proved impenetrable. Because certain cognitive functions have been difficult to tie to any particular neural mechanism, researchers hope to use neural networks, such as Google’s Neural Machine Translation system, to isolate the functions engaged by these processes and to apply those findings to human neuropsychology.
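
The “weights as synapses” analogy can be made concrete with a minimal sketch (not tied to any system mentioned above, and with arbitrary numbers): an artificial neuron simply computes a weighted sum of its inputs and passes it through a nonlinearity.

```python
# A minimal illustration of the "weights as synapses" analogy: an
# artificial neuron computes a weighted sum of its inputs and squashes
# it through an activation function. All values here are arbitrary.

import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

out = neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.4], bias=0.1)
print(f"{out:.4f}")
```

Learning, in this picture, is nothing more than adjusting the weights until the neuron’s outputs match the desired behavior.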

DLocRL: A Deep Learning Pipeline for Fine-Grained Location Recognition and Linking in Tweets

The struggle of neural networks to capture the subtleties and varieties of language use has slowed the adoption of this technology in real-world settings, even though such software, if improved, has tremendous commercial potential. To this end, a team of researchers has developed DLocRL, a deep learning pipeline for fine-grained location recognition and linking in tweets. As the researchers note, “Users often reveal their locations and describe what they are doing in tweets,” though informal language may prevent a machine from detecting the meaning. By using a deep neural network to filter for context, discern between multiple meanings of a single word or phrase, and compare the content with the user’s location at the time of the tweet (enabled by Twitter’s location feature), the pipeline can provide an accurate picture of real-time sentiment surrounding a particular business or location-based event.
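
The two-stage shape of this task, first recognizing a location mention and then linking it to a real place, can be sketched in a toy form. This is not the authors’ DLocRL model (which uses learned representations rather than lookups), and the gazetteer, tweet, and coordinates below are invented for illustration.

```python
# Toy two-stage recognition-and-linking sketch (NOT the DLocRL model):
# stage 1 spots candidate location mentions in a tweet; stage 2 links
# each mention to a gazetteer entry, breaking ties with the tweet's
# geotag. All data here is hypothetical.

GAZETTEER = {
    "springfield": [
        {"name": "Springfield", "state": "IL", "lat": 39.80, "lon": -89.65},
        {"name": "Springfield", "state": "MA", "lat": 42.10, "lon": -72.59},
    ],
    "navy pier": [
        {"name": "Navy Pier", "state": "IL", "lat": 41.89, "lon": -87.60},
    ],
}

def recognize(tweet):
    """Stage 1: return gazetteer keys mentioned in the tweet text."""
    text = tweet["text"].lower()
    return [key for key in GAZETTEER if key in text]

def link(mention, tweet):
    """Stage 2: pick the candidate entry closest to the tweet's geotag."""
    lat, lon = tweet["geo"]
    return min(
        GAZETTEER[mention],
        key=lambda c: (c["lat"] - lat) ** 2 + (c["lon"] - lon) ** 2,
    )

tweet = {"text": "Grabbing lunch in Springfield before the show",
         "geo": (39.78, -89.64)}
for mention in recognize(tweet):
    print(mention, "->", link(mention, tweet))
```

The value of a deep model over this kind of lookup is exactly the informal language the researchers highlight: misspellings, slang, and abbreviations that no fixed gazetteer match will catch.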

Deep Learning for Multiple-Image Super-Resolution

Deep networks have previously been used only for single-image super-resolution, but a team of researchers has published a report detailing their use of deep learning tools for multiple-image super-resolution. Super-resolution, the derivation of a high-resolution image from one or more lower-resolution originals, relies on a rich set of data to accurately rebuild images; the multiple-image variant, per the paper’s authors, “benefits from information fusion and in general allows for achieving higher reconstruction accuracy.” This is borne out in their experiments, which found that deep learning-based multiple-image super-resolution already matches the output of the most advanced single-image methods, despite the still-inchoate state of the former. Once it matures, multiple-image super-resolution may provide valuable insight into hard-to-image subjects such as distant astronomical objects or the ocean floor.
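
The benefit of information fusion can be seen even without a deep network. A minimal numpy sketch (averaging, not the paper’s learned method, and with an invented 1-D “signal” standing in for an image): independent noise across several observations of the same scene cancels out, so the fused estimate lands closer to the truth than any single observation.

```python
# Minimal sketch of the "information fusion" intuition behind
# multiple-image super-resolution: independent noise averages out
# across observations. This uses plain averaging on a synthetic 1-D
# signal; the paper's actual method uses learned deep networks.

import numpy as np

rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 64)  # stand-in for a clean image row
observations = [truth + rng.normal(0.0, 0.1, truth.shape) for _ in range(8)]

single_error = np.abs(observations[0] - truth).mean()
fused = np.mean(observations, axis=0)        # fuse the observations
fused_error = np.abs(fused - truth).mean()

print(f"single-observation error: {single_error:.4f}")
print(f"fused (8 observations) error: {fused_error:.4f}")
```

A deep network goes further than averaging by also exploiting sub-pixel shifts between observations, but the core advantage is the same: more observations carry more recoverable information.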

Speeding up Deep Learning with Transient Servers

The use of deep neural networks has, for many companies, been prohibitively costly in terms of either time or money. In what is described as “the first large-scale empirical testing of transient servers,” a team of researchers has sought to ease this financial burden by training data models on transient GPU servers. “Transient servers,” the researchers note, “offer significantly lower costs than their on-demand equivalents with the added complication that the cloud provider may revoke them at any time.” Ultimately, the experiment found that transient servers could deliver high-quality training at a much lower cost, though the researchers conceded that the uncertainty of server availability was a considerable drawback. Even so, the capacity of transient servers hints at the opportunity for their effective use: the paper suggests “dynamic transient clusters,” the addition of servers throughout the training process, which would “allow cloud customers the flexibility to add cheaper transient servers to speed up distributed training and ensure that they always have the best server configuration given their budget and rapidly changing server prices.”
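
The core trade-off, cheap servers that can vanish mid-run, is usually handled with checkpointing: periodically persist progress so a revocation only costs the work since the last checkpoint. A hypothetical simulation of that loop (the step counts and revocation probability are invented, not from the paper):

```python
# Hypothetical sketch of training on revocable transient servers:
# checkpoint every few steps so a revocation only loses the work done
# since the last checkpoint. Parameters below are invented.

import random

def train_on_transient(total_steps, checkpoint_every, revoke_prob, seed=0):
    """Simulate a training run that survives revocations via checkpoints.

    Returns the number of restarts needed to finish total_steps.
    """
    rng = random.Random(seed)
    completed = 0  # steps preserved in the most recent checkpoint
    restarts = 0
    while completed < total_steps:
        step = completed  # resume from the last checkpoint
        while step < total_steps:
            if rng.random() < revoke_prob:  # server revoked mid-run
                restarts += 1
                break
            step += 1
            if step % checkpoint_every == 0:
                completed = step  # persist progress
        else:
            completed = total_steps  # finished without revocation
    return restarts

print("restarts needed:", train_on_transient(1000, 50, 0.002))
```

The checkpoint interval is the knob: checkpointing more often wastes less work per revocation but adds overhead, which is part of why the dynamic-cluster scheduling the paper proposes matters.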

Conclusion

These papers, among others that have been presented so far this year, are notable for their innovations in deep learning and adjacent technologies. Neural networks will be able to solve many problems in business and policy, and may even provide us with insight into ourselves.

What papers would you add to the list? Let us know below!


Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.
