Thomas J. Royston, PhD – The Audible Human Project: Hearing What The Body Has To Say
We use sound to detect problems every day, probably more than we realise. When a mechanic asks a customer what seems to be wrong with their car, the answer is often a description of the unusual sound it has started to make. The mechanic usually knows what an engine should sound like, and can often diagnose the problem by listening. Historically, the movement of sound waves through the body has been, and remains, a highly valuable source of information for clinicians. Like mechanics, doctors use sound to aid in their diagnoses of diseases and injury. We are all familiar with the use of the stethoscope in medicine, but sound waves are also integral to cutting edge imaging methods which rely on elastography. These techniques, which are integrated into MRI or ultrasound, can analyse the elastic properties of soft tissues by detecting and imaging the vibratory motion produced by sound waves moving through them. Diseases of the liver, for example, can cause it to stiffen, while cancerous tumours will usually be harder than the tissues surrounding them, and are therefore detectable by elastographic methods.
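The stiffness-to-wave-motion link that elastography exploits can be illustrated with a toy calculation: in soft tissue, shear waves travel faster through stiffer material, so measuring the wave speed yields an estimate of the shear modulus via the standard relation G = ρc². The numbers below are illustrative physical values, not clinical thresholds.

```python
def shear_modulus(wave_speed_m_s, density_kg_m3=1000.0):
    """Estimate shear modulus (Pa) from shear wave speed: G = rho * c^2.
    Soft tissue density is close to that of water (~1000 kg/m^3)."""
    return density_kg_m3 * wave_speed_m_s ** 2

# Illustrative values: shear waves travel on the order of 1-2 m/s in soft,
# healthy tissue and noticeably faster in stiffened (e.g. fibrotic) tissue.
healthy = shear_modulus(1.5)
stiff = shear_modulus(3.0)
print(f"healthy: {healthy / 1000:.2f} kPa, stiff: {stiff / 1000:.2f} kPa")
```

Because stiffness grows with the square of wave speed, even a modest speed increase produces a large, readily imaged contrast between a tumour and the surrounding tissue.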
Professor Tom Royston began his academic life studying engineering as an undergraduate in the late 80s. ‘During my undergraduate and graduate studies in engineering I was drawn to the area of acoustics and vibrations,’ he recalls. ‘The mathematics of wave theory was challenging and wide-reaching, while the experimental work was fun to do. Back then I was focused on active sound and vibration control, which was a relatively hot area of practical development in the 80s and early 90s.’ He continued in this field throughout his postgraduate studies. The same year that Professor Royston completed his PhD, he was appointed as director of the Acoustics and Vibrations Laboratory at the University of Illinois at Chicago (UIC). Upon joining UIC, he was approached by Dr Richard Sandler, who was working at the nearby Rush University Medical Center. Dr Sandler also had a degree in engineering and was interested in applying mathematical analysis to sounds recorded by stethoscopes and other acoustic contact sensors. Together with another engineer, Dr Hansen Mansy, they began applying their combined knowledge of engineering, acoustics, vibration control and the mathematics of wave theory to medical diagnostics.
Among many early projects, one stood out to Professor Royston as being a particularly interesting challenge: ‘we wanted to identify the best acoustic approaches to quickly and reliably identify a pneumothorax, or collapsed lung’, he explains. The lungs are probably the most difficult organs from which to gather useful diagnostic information. ‘From an acoustic perspective, the lungs make the rest of the body seem kind of boring’, Professor Royston tells us. This is due to the fractal architecture of the lungs, in which the highly complex bronchial airway tree branches through the tissue, producing self-similarity over multiple spatial scales. This complexity is also a problem for conventional imaging techniques such as ultrasound and magnetic resonance imaging (MRI). Professor Royston envisioned applying computer modelling methods to the analysis of acoustic measurements obtained from contact sensors (stethoscopes), using either passive listening or percussive techniques that measure the response to externally generated acoustic stimuli.
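The self-similar branching that makes the lungs so acoustically rich can be sketched with a toy model (not Professor Royston's simulation): each airway bifurcates into two smaller ones, so the number of airways doubles each generation while diameters shrink by a roughly constant factor. The diameter scaling factor below, 2^(-1/3) ≈ 0.79, is a commonly quoted idealisation; real anatomy is messier.

```python
def airway_generations(n_generations, trachea_diameter_mm=18.0, scale=2 ** (-1 / 3)):
    """Toy self-similar airway tree: each generation bifurcates, so the
    airway count doubles while diameters shrink by a constant factor.
    Illustrative numbers only, not anatomical data."""
    return [
        {
            "generation": g,
            "airways": 2 ** g,                            # bifurcation doubles the count
            "diameter_mm": trachea_diameter_mm * scale ** g,
        }
        for g in range(n_generations + 1)
    ]

for gen in airway_generations(5):
    print(f"gen {gen['generation']}: {gen['airways']} airways, "
          f"{gen['diameter_mm']:.1f} mm diameter")
```

Carried over twenty-plus generations, this geometric doubling yields millions of tiny air-filled tubes, which is why sound scattering in the lung is so much more complex than in solid organs.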
Around the same time, Professor Royston was contacted by a PhD student, Shadi Othman, who was supervised by Richard Magin, PhD, a Professor of Bioengineering at UIC and an expert in magnetic resonance imaging (MRI). They were interested in developing an imaging technique called magnetic resonance elastography (MRE) and needed a collaborator with a background in acoustics. Professor Royston realised that his goal of using computer modelling to simulate the movement of sound in the body would be the key to interpreting their MRE measurements. The logical outgrowth of these two ongoing efforts was the development of the Audible Human Project.
‘The purpose of the Audible Human Project has been to develop a computer simulation model of how sound travels in the body and is altered by disease or injury in order to improve stethoscope-based and elastography-based diagnostic methods’
The big picture for the Audible Human Project
The main goal of the Audible Human Project is to develop a comprehensive computer simulation model of sound and vibration in the body. The project can be seen as a complement to the Visible Human Project, in which male and female cadavers were cut into very thin cross-sectional slices to build a detailed visual dataset of the inside of the human body. The stethoscope may be a symbol of medicine around the world, but Professor Royston’s vision for the Audible Human Project reaches beyond the qualitative, subjective and skill-dependent nature of that instrument – sound can also underpin diagnostic techniques that are quantitative, objective and automated. The aim is not to replace older methods such as the stethoscope, but to complement them and provide another highly useful addition to the medical toolbox.
Listening to the lungs
As mentioned before, the complexity of the lungs makes them a challenge for acoustics-based diagnostic methods. Though a collapsed lung causes a major change in the geometry of the lung, and should therefore produce a concomitant and detectable change in its acoustic properties, confounding factors can still affect these properties, which the simulations of the Audible Human Project must take into account. Pulmonary edema (fluid in the lungs), effusion and mucus plugs all have an effect on the way sound moves through the lungs. Many diseases of the lungs also cause changes in their acoustic properties. In chronic obstructive pulmonary disease (COPD), for example, the walls between the alveoli (air sacs where gas exchange takes place) break down, meaning there are fewer, but larger, alveoli. The bronchioles (very small airways leading to the alveoli) also lose their shape and can fill with mucus. Other lung diseases can lead to fibrosis, scarring, narrowing of airways or changes in their elasticity. Cancer also produces changes in the structure and composition of the lungs. Lung cancer remains the most common cancer-related cause of death in men and women, killing around 1.4 million people per year, so any advancement in its diagnosis is sure to have a positive effect on people’s lives. One of the main ideas behind the Audible Human Project is that these microscopic mechanical changes will cause macroscopic changes in the acoustic properties of the lungs, which will be characteristic of the particular disease elements present. A better understanding of these changes will lead to improvements in the detection and diagnosis of lung diseases.
Work on the project so far
Work so far on the project has taken multiple variables into account in developing the computer simulations, such as tissue viscoelasticity, fractal airway modelling and different breathing sounds or externally generated percussion sounds. Each of these variables poses its own mathematical and engineering challenges for Professor Royston’s team to overcome. To gather data on how sound moves through lung-like material, the team built synthetic lung models from ‘Ecoflex’, a gel that shares many properties with body tissue, threaded with air passages. The team measured the movement of sound through this model using a piece of equipment more commonly found in engineering than in biomedicine: a scanning laser Doppler vibrometer. This instrument shines laser light onto a surface and then measures the reflected light. As the surface vibrates, the colour (wavelength) of the light is altered by a small amount, due to the Doppler effect created by the surface moving closer to or further from the instrument. Videos of these measurements on the Ecoflex model at different sound frequencies can be found on YouTube by searching for ‘Audible Human Project’.
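The vibrometer's principle can be sketched numerically: light reflected from a surface moving with velocity v is Doppler-shifted in frequency by f_D = 2v/λ (the factor of two arises because the light travels to the surface and back), so the measured shift tracks the surface's vibration velocity. The wavelength and velocity below are illustrative, not the team's instrument settings.

```python
def doppler_shift_hz(surface_velocity_m_s, wavelength_m=633e-9):
    """Doppler frequency shift of laser light reflected from a moving
    surface: f_D = 2 * v / lambda. The factor of 2 accounts for the
    round trip; 633 nm is a common helium-neon laser wavelength."""
    return 2.0 * surface_velocity_m_s / wavelength_m

# A surface vibrating with a peak velocity of just 1 mm/s shifts the
# reflected light by roughly 3.2 kHz - tiny relative to the optical
# frequency, but easily resolved by interferometric detection.
print(f"{doppler_shift_hz(1e-3):.0f} Hz")
```

This sensitivity to sub-millimetre-per-second motion, without touching the surface, is what makes the instrument attractive for mapping acoustic wave motion over a soft gel model.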
The predictions made by the team’s simulations were tested against experimental measurements performed on pig lungs, first in just the major airways and later on whole lungs. Testing was also performed on human clinical subjects. The results so far are highly promising, with experimental measurements producing data similar to the software’s predictions, though a great deal of work remains in refining the predictive power of the model.
Future work: Expansion into the cardiopulmonary environment
Professor Royston plans to expand the scope of the Audible Human Project beyond the lungs, producing simulations of the cardiopulmonary environment and how it is affected by pulmonary hypertension. He hopes that in the future, the team’s research will be useful in identifying additional diagnostic signatures of pulmonary hypertension which can be identified through imaging. Like the team’s work on the lungs, these indicators might also be detectable acoustically by using contact sensors or ultrasonic methods, such as echocardiography and elastography, as well as magnetic resonance methods, either independently or by combining multiple approaches. Combining the Audible Human Project with current and future diagnostic methods could make it an invaluable tool in the detection of pulmonary hypertension. The non-invasive nature of these acoustic techniques is highly useful for gaining diagnostic information in this area, as it is for the lungs. Pulmonary arterial pressure, vascular resistance and cardiac output, as well as the type of pulmonary hypertension, are all measurements which are currently difficult to acquire without invasive techniques, and all could have medical value beyond just the management of pulmonary hypertension.
‘From an acoustic perspective, the lungs make the rest of the body seem kind of boring’
Future work: Multiscale capability and enhanced visualisation
Professor Royston’s grandest plans for the Audible Human Project are those relating to extending its capabilities across multiple scales and to enhanced visualisation. Spatially, the project is currently limited to the whole lung and larger airways, but Professor Royston’s vision is one of the Multiscale Audible Human Project. Once developed, he believes the simulations could work across multiple spatial scales, from the cellular up to the whole organ. He hopes to use fractal and fractional calculus-based mathematical methods to more effectively use macroscopically measured mechanical wave motions to make quantitative predictions of microscopic changes at the cellular and tissue level which could be indicative of disease or injury.
Enhanced visualisation is another major area of development in the Audible Human Project’s future. Professor Royston plans to use the emerging technologies of 3D visualisation and augmented reality to enhance the project’s usefulness, seeing conventional 2D monitor viewing methods as a limitation for a methodological toolkit such as this. These techniques would allow better visualisation of the data for diagnosticians and aid in their interpretation of it.
Once up and running, the initial focus of the program will be the detection of difficult-to-diagnose pathologies of the pulmonary system. Professor Royston plans to make the source code for the program freely available in all formats. There is still a lot of work to be done on the Audible Human Project, but the team are making great progress in this exciting field, which is sure to be of great benefit to medical science in years to come.
Meet the researcher
Thomas J. Royston, PhD
Professor and Head of Bioengineering
College of Engineering & College of Medicine
University of Illinois at Chicago
USA
Professor Tom Royston performed his doctoral studies on active sound and vibration control at the Ohio State University, earning a PhD in Mechanical Engineering in 1995 and later being awarded an NSF CAREER Award to expand upon this work. He then joined the Mechanical Engineering department at the University of Illinois at Chicago (UIC), where, after speaking with a clinician-scientist, he became interested in applying his knowledge to medicine. He is a recipient of the Acoustical Society of America Lindsay Award (2002), and his work on the Audible Human Project has been recognised with the NIH National Institute of Biomedical Imaging and Bioengineering Nagy New Investigator Award (2014). Tom is also a Fellow of the American Society of Mechanical Engineers (2007). He became head of the Richard and Loan Hill Department of Bioengineering at UIC in 2009, where he has overseen its expansion into both the UIC Colleges of Engineering and Medicine.
KEY COLLABORATORS
Hansen A. Mansy, PhD, University of Central Florida
Richard H. Sandler, MD, Nemours Children’s Hospital
Robert A. Balk, MD, Rush University Medical Center
Richard L. Magin, PhD, University of Illinois at Chicago
Dieter Klatt, PhD, University of Illinois at Chicago
Cristian Luciano, PhD, University of Illinois at Chicago
Eric J. Perreault, PhD, Northwestern University
J. Edward Colgate, PhD, Northwestern University
FUNDING
NIH
NSF