Machine Learning and Compression Systems in Communications and Healthcare
Machine learning has applications across many disciplines. Two important fields using machine learning to solve long-standing problems are communications and healthcare. Dr. Thomas Wiegand, executive director and professor at the Fraunhofer Heinrich Hertz Institute, goes over exciting advances machine learning has made possible in these disciplines.
Machine learning has vast potential in the realm of communications in two different areas: video compression and mobile networks. Both may require complex algorithms to keep up with advancements.
Classical compression systems are notoriously clunky. The user sees a high-quality video that’s been compressed from the source for ease of transmission and may not think much about how that video gets there.
Around 80% of all internet traffic is compressed video, with computers processing a sequence of pictures in encoding and decoding order. H.264 is the current standard, but Wiegand notes that H.265 is beginning to catch on. In the 1990s, the H.261 standard was 25 pages long; H.264 runs to 264 pages, and H.265 is over 600. These are complicated systems, and we’re losing the ability to process them quickly without some kind of machine intervention.
As video encoding gets better, the complex decoding process becomes too bulky for hand-crafted algorithms to handle. Wiegand’s team applies machine learning to learn parts of those algorithms from experience instead.
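To make the encoding side concrete, the core idea behind block-based codecs like H.264/H.265 is that each block of the current frame is predicted from a reference frame, and only the motion vector and the prediction residual need to be transmitted. The sketch below is a deliberately tiny illustration of that one idea (real codecs add transforms, quantization, and entropy coding on top); all data here is made up.

```python
# Toy sketch of block-based motion-compensated prediction, the core idea
# behind H.264/H.265 encoders. Illustrative only; real codecs are far richer.

def best_match(block, ref, top, left, search=2):
    """Search a small window in the reference frame for the candidate block
    that minimizes the sum of absolute differences (SAD)."""
    h, w = len(block), len(block[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > len(ref) or x + w > len(ref[0]):
                continue
            cand = [row[x:x + w] for row in ref[y:y + h]]
            sad = sum(abs(a - b) for ra, rb in zip(block, cand)
                      for a, b in zip(ra, rb))
            if best is None or sad < best[0]:
                best = (sad, (dy, dx), cand)
    return best

# Reference frame, and a "current" block whose content moved right by one pixel.
ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
cur_block = [[9, 8], [7, 6]]           # block located at (1, 2) in the current frame
sad, motion, pred = best_match(cur_block, ref, top=1, left=2)
residual = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(cur_block, pred)]
print(motion, residual)   # a perfect match leaves an all-zero residual
```

The learned-compression idea Wiegand describes amounts to replacing pieces of this hand-designed search and prediction pipeline with models trained from data.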
The second piece of compression deals with human visual perception. The traditional setup shows someone a video on a display and asks questions about it, using the responses to check quality. This kind of quality assessment is very subjective.
Wiegand’s team bypasses those subjective assessments by extracting quality directly from brain signals. They measure signals through EEG and derive a quantitative measure of how the brain perceives the video. Those quantitative signals are then fed into the ML algorithm.
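The shape of such a pipeline can be sketched in a few lines: learn a mapping from an EEG-derived feature to a quality score, then use the fitted model in place of repeated subjective viewing tests. The feature, the scores, and the linear model below are all hypothetical stand-ins, not Wiegand's actual method.

```python
# Minimal sketch (hypothetical data): fit a linear map from an EEG-derived
# feature to a perceived-quality score, so a learned model can stand in for
# repeated subjective tests. Real work would use richer features and models.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical pairs: (EEG amplitude feature, mean opinion score on a 1-5 scale).
feat  = [0.1, 0.4, 0.5, 0.9]
score = [1.2, 2.4, 2.8, 4.4]
a, b = fit_line(feat, score)
predict = lambda x: a * x + b
print(round(predict(0.7), 2))
```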
With 5G now here, key performance indicators (KPIs) are assigned to each class of network traffic. We can measure how well 4G fulfills those KPIs, with the expectation that 5G will do as well or better; these KPIs form the basis of the communication infrastructure. While it’s impossible to maximize all KPIs at the same time, the goal is for 5G to improve on 4G’s KPIs.
This development may have far-reaching effects on communications and the Internet of Things. Anything synced through communication to the system has to be reliable to keep up. Mobile operators have to figure out how to manage 5G investment because it’s expensive. Which KPI do you choose?
Enhanced mobile broadband targets ten times the throughput (bits/s per km²). We can improve spectral efficiency using Massive MIMO, add small cells to increase cell density, and move to higher frequencies for more bandwidth. This is the new 5G architecture: you are initially served by the existing 4G infrastructure, and the added 5G infrastructure provides those new capabilities.
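The "ten times throughput" target can be sanity-checked with a back-of-the-envelope calculation: area throughput (bits/s per km²) is roughly spectral efficiency × bandwidth × cell density, and each of the three levers above multiplies one factor. The numbers below are illustrative, not from the talk.

```python
# Back-of-the-envelope sketch of the three 5G levers mentioned above:
# area throughput (bit/s per km^2) ~= spectral efficiency * bandwidth * cell density.
# All numbers are illustrative.

def area_throughput(spectral_eff_bps_hz, bandwidth_hz, cells_per_km2):
    return spectral_eff_bps_hz * bandwidth_hz * cells_per_km2

base = area_throughput(2, 20e6, 5)        # a 4G-like baseline
# Massive MIMO (~2.5x efficiency), densification (2x cells), wider band (2x):
upgraded = area_throughput(5, 40e6, 10)
print(upgraded / base)   # → 10.0, i.e. the "ten times throughput" target
```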
From here, machine learning takes over prediction, because the data coming in from the new 5G infrastructure arrives at such a broad scale. Prediction isn’t the only application. You also have:
- End-to-end network slicing
- IoT edge computing
- RAN feature extraction
- Caching for MEC (multi-access edge computing)
- Handover optimization
- Traffic classification
- and many more
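To make one item on that list concrete, traffic classification can be as simple as assigning each flow to the nearest class centroid in a small feature space. The features, centroids, and classes below are hypothetical; production classifiers would use many more features and normalize them.

```python
# Illustrative sketch (hypothetical features) of the traffic-classification
# item above: assign a flow to the nearest class centroid, using
# (mean packet size in bytes, mean inter-arrival time in ms) as features.

def nearest_centroid(flow, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(flow, centroids[label]))

# Hypothetical centroids, as if learned from labeled flows.
centroids = {
    "video": (1200.0, 5.0),    # large packets, steady arrivals
    "voice": (160.0, 20.0),    # small packets, periodic
    "web":   (600.0, 200.0),   # bursty
}
print(nearest_centroid((1100.0, 8.0), centroids))   # → video
```

In practice the two features live on very different scales, so they would be normalized before computing distances; the toy values here are chosen so the raw distances still separate the classes.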
Wiegand’s institute is involved in a multitude of machine learning projects, including fMRI analysis, gait analysis, and others. While these are excellent individual projects, Wiegand is more interested in expanding them to a global scale.
For example, one project involving ECG and radio transmission could be adapted for use around the world, because doctors don’t always have time to monitor those results. His institute worked with the World Health Organization to identify use cases where AI could transform the healthcare profession. The AI for Good summit seeks to use AI not just for individual projects but as a way to solve issues globally and to evaluate those solutions regularly.
The ITU and WHO Focus Group on Artificial Intelligence for Health was created in July 2018, and one of the first real-world problems the group is focusing on is the worldwide shortage of health workers. One potential application of AI is making diagnoses more accurate, which could mitigate the shortage to some degree.
Two data mapping issues arise from this problem alone: mapping data to generate a smart diagnosis, and using the data plus the diagnosis to build a sensible treatment plan. Instead of typing symptoms into the internet and getting a grave diagnosis, AI could look at results from a variety of diagnostic tools to reach the right conclusion. That diagnosis, combined with other data, could then help the patient proceed with treatment.
The assessment framework for this project could include:
- standardized input data sets
- confirmation after diagnosis and treatment for each patient
- both public training and private test sets
- metrics for comparison
- allowing the algorithms to compete for accuracy with public results
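The framework above can be sketched as a small scoring harness: every competing algorithm is run against the same held-out test set, scored with a shared metric, and ranked publicly. The algorithms, data, and metric below are hypothetical stand-ins for whatever the focus group would standardize.

```python
# Minimal sketch of the assessment framework above: score competing
# diagnostic algorithms on the same held-out test set and rank them.
# Algorithms and data are hypothetical stand-ins.

def accuracy(predictions, labels):
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def leaderboard(algorithms, test_inputs, test_labels):
    """Score every algorithm with a shared metric and rank best-first."""
    scores = {name: accuracy([algo(x) for x in test_inputs], test_labels)
              for name, algo in algorithms.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical private test set: one feature per case, plus a true label.
inputs = [0.2, 0.8, 0.4, 0.9]
labels = ["healthy", "sick", "healthy", "sick"]
algos = {
    "threshold@0.5": lambda x: "sick" if x > 0.5 else "healthy",
    "always_sick":   lambda x: "sick",
}
board = leaderboard(algos, inputs, labels)
print(board)
```

Keeping the test set private while publishing the resulting scores is what lets the algorithms "compete for accuracy with public results" without anyone overfitting to the benchmark.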
Conclusions From Wiegand
The most critical piece of the communications portion remains video compression, since it makes up the bulk of internet traffic, but other communications problems will also benefit from machine learning as we move to newer, faster, broader forms of communication. As the world’s population grows, developing countries and those with aging populations (a result of better healthcare) will see continued difficulty keeping up with healthcare demand. AI could take these pressures off and create a more efficient world.
This video was taken at ODSC London 2018 — attend ODSC East 2019 this April 30 to May 3 for more unique content! Subscribe to our YouTube channel for more videos taken at past conferences.