How can we… use AI to take bio-imaging to the next dimension?

Daniel Esteban-Ferrer, CEO, VRi

23 March 2022

Super-resolution microscopy makes it possible to obtain images at the nanoscale by using clever tricks of physics to get around the limits imposed by light diffraction. This innovation, which was awarded the Nobel Prize in Chemistry in 2014, has allowed researchers to observe molecular processes as they happen. However, these techniques generate enormous volumes of data, and there has been a lack of tools to visualise and analyse it in three dimensions.

Addressing the bio-imaging challenge

My research at Cambridge focused on producing tools for the effective use and visualisation of three-dimensional datasets, including virtual reality systems for visualising super-resolution microscopy images. There are many applications of this research, from drug discovery and diagnostics in the life sciences to materials research and engineering.

I helped create software called vLUME that allows super-resolution microscopy data to be visualised and analysed in virtual reality, so that researchers can ‘walk inside’ their data and examine everything from individual proteins to entire cells. This allows them to interact with data in an intuitive and immersive way, and potentially find answers to biological questions faster.

The software can load multiple datasets with millions of data points and finds patterns in the complex data using built-in clustering algorithms. These findings can then be shared with collaborators worldwide using the image and video features in the software.
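Clustering of single-molecule localisation data is typically density-based. As a minimal sketch (not necessarily the algorithm used in vLUME), here is how a point cloud of localisations could be grouped with DBSCAN from scikit-learn; the file name and parameter values are purely illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical input: one x, y, z localisation (in nanometres) per row.
points = np.loadtxt("localisations.csv", delimiter=",")  # shape (N, 3)

# Density-based clustering: group localisations lying within ~50 nm of
# each other, requiring at least 10 points to form a cluster.
labels = DBSCAN(eps=50.0, min_samples=10).fit_predict(points)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int(np.sum(labels == -1))
print(f"{n_clusters} clusters found, {n_noise} points labelled as noise")
```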

A new start(up)

Having finished this research at the University of Cambridge, I created a company to work out new ways of visualising and analysing 3D biomedical data. In 2020 VRi was born, and I am now focusing my efforts on it as CEO. However, I will still work with the Chemistry department, providing them with our imaging software in return for data and feedback.

VRi’s software can be used in virtual reality, but it also allows researchers to work with and visualise huge amounts of data from imaging technologies – Advanced Light Microscopy, Magnetic Resonance Imaging (MRI), (Micro) Computed Tomography (CT), Atomic Force Microscopy, and (Cryo) Electron Microscopy – more intuitively. We’re basically developing the X-rays of the 21st century. We can digest any kind of data file (DICOM, TIFF stacks, NIfTI, etc.), data from any type of medical imaging instrument (MRI, CT, ultrasound, etc., regardless of the brand) and any scale of data, from nano (subcellular) to macro (full body).
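To give a flavour of what “digesting any kind of data file” involves, here is a minimal, hypothetical sketch of reading a few of these formats into a common NumPy volume; it deliberately glosses over voxel spacing, orientation and multi-file DICOM series, which a real pipeline must handle:

```python
import numpy as np

def load_volume(path):
    """Load a 3D volume from a few common bio-imaging formats into a NumPy array.

    Illustrative only: real pipelines also need spacing, orientation and
    multi-file DICOM series handling.
    """
    if path.endswith((".nii", ".nii.gz")):
        import nibabel as nib               # NIfTI (common for MRI)
        return np.asarray(nib.load(path).get_fdata())
    if path.endswith((".tif", ".tiff")):
        import tifffile                     # TIFF stacks (microscopy)
        return tifffile.imread(path)
    if path.endswith(".dcm"):
        import pydicom                      # a single DICOM file
        return pydicom.dcmread(path).pixel_array
    raise ValueError(f"Unsupported format: {path}")
```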

One of our first products, driven by the needs of pharmaceutical companies we contacted, was an automatic brain atlas mapper. Many analyses of 3D bioimages rely on highly qualified individuals to manually map a reference atlas onto scans of the organ – moving from 2D to 3D – at approximately four minutes per slice. We use deep learning neural networks to do this mapping automatically, around 1,000 times faster, saving pharmaceutical companies, hospitals and research institutes weeks of work. We use Python, TensorFlow and many other libraries to create a deformation matrix that registers the imaged brain to a reference brain, and then invert it to segment the original data using the reference atlas.
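As an illustration of that last step (a sketch under stated assumptions, not VRi’s actual code), suppose the network has already predicted an inverse deformation field; the atlas labels can then be warped into the space of the imaged brain with SciPy. The function name, array shapes and conventions below are assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_labels(atlas_labels, displacement):
    """Warp reference-atlas labels into subject space.

    atlas_labels: (D, H, W) integer label volume in reference space.
    displacement: (3, D, H, W) inverse deformation field, in voxels,
                  mapping each subject voxel to its reference location.
    """
    grid = np.indices(atlas_labels.shape).astype(np.float32)  # identity grid
    coords = grid + displacement                               # subject -> reference lookup
    # order=0 (nearest neighbour) keeps the output integer-valued labels
    return map_coordinates(atlas_labels, coords, order=0, mode="nearest")
```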

Alzheimer’s lesion quantification is another application of the technology. With our platform we can use scans to segment and quantify brain changes arising from a number of conditions, including dementia and other neurodegenerative diseases such as multiple sclerosis. Give us a large number of datasets and we will do it!
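Once a segmentation model has produced a lesion mask, the quantification step itself is simple bookkeeping. A minimal, hypothetical example of turning a binary mask and the voxel size into a lesion volume:

```python
import numpy as np

def lesion_volume_ml(mask, voxel_size_mm):
    """Total lesion volume in millilitres from a binary segmentation mask.

    mask: (D, H, W) boolean array, True where the model marked a lesion.
    voxel_size_mm: (dz, dy, dx) voxel dimensions in millimetres.
    """
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1000 mm^3 = 1 ml
```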

I was keen to learn Python as part of the Accelerate Programme for Scientific Discovery while I was a researcher mulling over how to commercialise my research. While I don’t need to use the software myself while heading up VRi, because we have two dedicated data science and machine learning experts on the team, the knowledge I gained from the course is still incredibly useful. Without what I learned on the programme, it would be harder to understand how they are using machine learning or to speak the same language as the technical members of the team. Maybe I could have reached where I am today without it, but it would probably have taken longer.

Looking to the future

For now, our technology is largely used by researchers for blue-sky science, but we plan to start working on real use cases that will more directly benefit society. One application we’re developing is the automatic measurement of blood vessel dimensions, which could help doctors prepare faster for vascular surgery. This is still a work in progress, but it could save vascular surgeons at least one to two hours of tedious, repetitive work every day. We plan to have our first commercial application (for research use only) in less than a year. We hope to obtain CE and FDA certifications within the next two years, which would open up the market to international healthcare systems. Finally, we are looking for external funding and a Chief Medical Officer, and we are always open to hiring the best talent in data science, machine learning for bio-imaging and computer vision.
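One common way to estimate vessel calibre from a segmented scan – offered here only as a hypothetical sketch of the kind of measurement involved, not our product’s method – is to combine a distance transform with a centreline skeleton:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def vessel_diameters_mm(vessel_mask, voxel_size_mm):
    """Estimate local vessel diameters along the centreline.

    vessel_mask: (D, H, W) boolean segmentation of the vessel lumen.
    voxel_size_mm: isotropic voxel size in millimetres (assumed here).
    """
    # Distance from every lumen voxel to the nearest background voxel
    dist = distance_transform_edt(vessel_mask) * voxel_size_mm
    # Centreline voxels; local diameter is roughly twice the wall distance
    centreline = skeletonize(vessel_mask).astype(bool)
    return 2.0 * dist[centreline]
```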

Daniel Esteban-Ferrer (March 2022)