SIGNANT HEALTH UNVEILED AT DIA 2019
DIA 2019 delivered a first-rate showcase for Signant Health’s new brand and subject-matter expert insight on solutions that simplify every step of the patient’s journey through clinical research
Each year, DIA encourages important conversations about advancing research, and Signant Health’s contribution to that dialogue, underpinning the new brand and strategy, was a ruthless focus on the patient experience.
Signant executives, scientific leaders and operational experts chaired and delivered sessions across a number of important technology-related themes. Signant’s profile amidst the calibre of thought leadership at this global annual event underlines the company’s industry reputation for scientific, operational and regulatory expertise in the collection and validation of important eSource data, as well as its recognised pioneering drive to innovate in eClinical science and technology.
Presentations included: a pragmatic view on the potential application of artificial intelligence in clinical trials data and processes; experience and practical considerations on the electronic collection of clinical outcome assessments in oncology and Alzheimer’s disease; leading cross-industry work on implementation of sensors and wearables to provide endpoints to support labelling claims; technology innovation in the development of novel endpoints using existing technologies; and learnings from the implementation of electronic informed consent.
Understanding brain function - ‘The face is the mirror of the mind’
As part of a session on emerging technologies in clinical research entitled ‘The face is the mirror of the mind’, Signant Health’s Bill Byrom spoke about the potential to develop novel endpoints using mobile technology to understand more about brain function and activity.
Bill cited physicist George Edgin Pugh, who stated that “if the human brain were so simple that we could understand it, we would be so simple that we couldn’t”. The face gives us clues as to what is going on in the brain, and Bill’s presentation looked at what can be learnt from the face and voice, and how this might lead to new outcomes we can measure to understand more about health status and treatment effects. He also spoke briefly about research in this area on which he is collaborating with scientists at Nottingham Trent University’s Medical Design Technology Research Group in the UK.
Bill shares some of his research here…
What can be learned from the face and the voice?
Measuring eye movements has already been shown to be of value in understanding health status in a number of conditions. Measuring treatment effects in concussion and traumatic brain injury (TBI) is challenging due to the lack of robust objective measures; recently, however, eye tracking has emerged as a promising biomarker for TBI. Studying saccadic eye movements (rapid movements away from a point of fixation) and fixation stability has been shown to have significance in Huntington’s disease, progressive supranuclear palsy and Parkinson’s disease (PD). In the early stages of dementia, saccadic eye movement recording may help to distinguish between Lewy body dementia and Alzheimer’s disease. In addition, blink rate can provide an indirect measure of dopamine activity in the CNS – blink rate is reduced in PD and increased in schizophrenia, for example.
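As a purely illustrative sketch (not a description of any Signant Health product or study method), blink rate could be estimated from a per-frame eye-openness signal – for example an eye aspect ratio derived from eyelid landmarks in video – by counting dips below a closure threshold. The function name, threshold value and input signal below are all hypothetical:

```python
def blink_rate(eye_openness, fps, threshold=0.2):
    """Estimate blinks per minute from a per-frame eye-openness signal.

    eye_openness: hypothetical per-frame openness values (e.g. an eye
                  aspect ratio), one value per video frame.
    fps:          video frame rate (frames per second).
    threshold:    illustrative closure threshold; a blink is counted
                  each time the signal drops below it.
    """
    blinks = 0
    closed = False
    for value in eye_openness:
        if value < threshold and not closed:
            blinks += 1          # signal just dropped below threshold
            closed = True
        elif value >= threshold:
            closed = False       # eye has reopened
    duration_minutes = len(eye_openness) / fps / 60.0
    return blinks / duration_minutes
```

In practice the openness signal would need smoothing and a subject-calibrated threshold; this sketch only shows the counting principle.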
Facial expression can provide insights into certain disease indications, including autism, depression, schizophrenia and post-traumatic stress disorder. Objective measures of facial expression may help to quantify underlying emotional characteristics and can be obtained from video data by detecting facial landmarks. The change in the relative position of these landmarks provides a measure of changing facial expression, which may offer insights into underlying emotions and emotional status.
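The landmark-based idea above can be sketched very simply, assuming landmark coordinates have already been extracted from two video frames by some face-tracking step. Everything here is hypothetical – the function, the landmark ordering, and the choice of two stable points (such as the eye centres) as a normalising reference:

```python
import math

def expression_change(landmarks_a, landmarks_b, ref_pair=(0, 1)):
    """Mean displacement of facial landmarks between two frames,
    normalised by a reference distance (e.g. inter-ocular distance)
    so the score is invariant to face size and camera distance.

    landmarks_a, landmarks_b: lists of (x, y) points in the same order.
    ref_pair: indices of two stable landmarks used as the reference.
    """
    i, j = ref_pair
    reference = math.dist(landmarks_a[i], landmarks_a[j])
    displacements = [math.dist(a, b) for a, b in zip(landmarks_a, landmarks_b)]
    return sum(displacements) / (len(displacements) * reference)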
Phonation tests help in the early detection of PD. Other aspects of voice acoustics – such as intonation and speaking rate – have been shown to relate to disease severity in conditions such as depression and schizophrenia. For example, more depressed patients may speak more slowly and with less pitch variation (i.e., in a more monotone manner). We can also apply speech-to-text engines and natural language processing techniques such as sentiment analysis to assess the emotional intent of the words spoken.
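Two of the acoustic features mentioned above – speaking rate and pitch variation – reduce to simple statistics once a transcript and a fundamental-frequency (F0) track have been extracted by upstream speech-processing tools. The following is a minimal sketch under that assumption; the function names and inputs are illustrative, not part of any named toolkit:

```python
import statistics

def speaking_rate_wpm(word_count, duration_seconds):
    """Words per minute over an utterance, given a word count
    (e.g. from a speech-to-text transcript) and its duration."""
    return word_count * 60.0 / duration_seconds

def pitch_variation(f0_hz):
    """Sample standard deviation of fundamental-frequency (F0) values
    in Hz; lower values indicate flatter, more monotone speech."""
    return statistics.stdev(f0_hz)
```

In a real pipeline these features would be computed per utterance and compared against a subject’s own baseline, since absolute speaking rate and pitch range vary widely between individuals.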
The potential of mobile applications
This presentation explored examples of mobile technology that could be used by patients in home or unsupervised settings to collect novel endpoints and develop new insights about health status and treatment effects using features extracted from the voice and face. As part of this, Bill presented early proof-of-concept work exploring the combination of expression detection, voice acoustics and sentiment analysis developed by the Medical Design Technology Group at Nottingham Trent University, a project to which Bill contributes as a visiting researcher.
In addition to keeping abreast of all the latest industry developments, DIA 2019 was a great opportunity to connect with our valued customers and share the excitement of the launch of the new company name and brand.
Signant Health remains focused on building solutions from the patient out, delivering speed, efficiency and increased data integrity. That simplicity and clarity extend the reach of drug development and reduce investment as sponsors and CROs bring life-changing therapies to our families and communities around the world.