05 April, 2021 10:22 AM IST | Washington | IANS
Researchers have developed a method that uses the camera on a person's smartphone or computer to measure their pulse and respiration rate from a real-time video of their face. The development comes at a time when telehealth has become a critical way for doctors to provide health care while minimising in-person contact during Covid-19.
The University of Washington-led team's system uses machine learning to capture subtle changes in how light reflects off a person's face, which is correlated with changing blood flow. Then it converts these changes into both pulse and respiration rate. The researchers presented the system in December at the Neural Information Processing Systems conference. Now the team is proposing a better system to measure these physiological signals.
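The article does not spell out the team's algorithm, but the general idea it describes, recovering a pulse from tiny colour changes in face video, is known as remote photoplethysmography. As a rough illustration only, the sketch below averages the green channel of a stack of face-crop frames and picks the dominant frequency in the heart-rate band; the function name and the simple FFT approach are illustrative assumptions, not the researchers' method.

```python
import numpy as np

def estimate_pulse_bpm(frames, fps):
    """Illustrative pulse estimate from face-crop video frames.

    frames: array of shape (T, H, W, 3), RGB.
    fps:    frames per second of the video.
    Returns the dominant green-channel frequency in beats per minute.
    """
    # Average the green channel over the face region for each frame;
    # blood-volume changes modulate green reflectance most strongly.
    signal = frames[:, :, :, 1].reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()          # remove the DC component

    # Find the strongest frequency in a plausible heart-rate band.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)   # roughly 42-240 beats/min
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0
```

A real system would add face detection, motion compensation and, as the article notes, machine learning to separate blood-flow changes from lighting and appearance.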
This system is less likely to be tripped up by different cameras, lighting conditions or facial features, such as skin colour, according to the researchers, who will present these findings on April 8 at the Association for Computing Machinery (ACM) Conference on Health, Inference, and Learning. "Every person is different," said lead study author Xin Liu, a UW doctoral student.
"So this system needs to be able to quickly adapt to each person's unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in." The first version of this system was trained with a dataset that contained both videos of people's faces and "ground truth" information: each person's pulse and respiration rate measured by standard instruments in the field.
The system then used spatial and temporal information from the videos to calculate both vital signs. While the system worked well on some datasets, it still struggled with others that contained different people, backgrounds and lighting. This is a common problem known as "overfitting," the team said. The researchers improved the system by having it produce a personalised machine learning model for each individual.
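The article gives no detail on how the per-person model is produced, but the personalisation idea, starting from a generic model and adapting it with a few labelled samples from one individual, can be sketched in miniature. The linear estimator, the function name and the plain gradient-descent loop below are illustrative assumptions, not the UW team's architecture.

```python
import numpy as np

def personalise(weights, features, ground_truth, lr=0.05, steps=100):
    """Toy per-person adaptation of a generic pulse estimator.

    weights:      (D,) parameters of a generic linear estimator.
    features:     (N, D) per-frame features from one person's video.
    ground_truth: (N,) reference pulse values from a contact sensor.
    Returns person-specific weights after a few gradient steps on
    the mean-squared error against the ground-truth readings.
    """
    w = weights.copy()
    for _ in range(steps):
        pred = features @ w
        # Gradient of mean((pred - ground_truth)**2) w.r.t. w.
        grad = 2.0 * features.T @ (pred - ground_truth) / len(ground_truth)
        w -= lr * grad
    return w
```

Fitting a small adaptation to each person is one standard way to counter the overfitting the article mentions: the generic model captures what is common across people, and the per-person step absorbs individual appearance and environment.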
Specifically, the personalised model looks for the areas of a video frame most likely to contain physiological signals correlated with changing blood flow, across different contexts such as skin tones, lighting conditions and environments. From there, it focuses on those areas to measure the pulse and respiration rate. While this new system outperforms its predecessor when given more challenging datasets, especially for people with darker skin tones, there is still more work to do, the team said.
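One simple way to picture "finding the important areas" is to score each spatial region of the video by how strongly its signal tracks a reference pulse waveform, then attend to the best regions. The grid layout, function name and correlation-based scoring below are illustrative assumptions, not the paper's attention mechanism.

```python
import numpy as np

def rank_regions(frames, reference, grid=4):
    """Score spatial regions of a face video by how strongly their
    green-channel signal correlates with a reference pulse waveform.

    frames:    (T, H, W, 3) video; H and W divisible by `grid`.
    reference: (T,) pulse waveform to correlate against.
    Returns a (grid, grid) array of absolute correlations; higher
    values mark regions worth focusing on.
    """
    T, H, W, _ = frames.shape
    scores = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            patch = frames[:, i * H // grid:(i + 1) * H // grid,
                              j * W // grid:(j + 1) * W // grid, 1]
            sig = patch.reshape(T, -1).mean(axis=1)
            scores[i, j] = abs(np.corrcoef(sig, reference)[0, 1])
    return scores
```

In a learned system the scoring is done by the network itself rather than by explicit correlation, which is what lets it adapt across skin tones, lighting conditions and environments.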
This story has been sourced from a third-party syndicated feed.