Many machine learning (ML) approaches have been applied to biomedical data, but few have led to substantial applications, largely because of the poor interpretability of the resulting models. Although ML approaches have shown promising results in building prediction models, they are typically data-centric, lack biological context, and work best for specific feature types. Interpretability is the ability of an ML model to identify causal relationships among variables; it is crucial for uncovering new insights, justifying a model's predictions, and ultimately building user trust for further applications. One way to achieve higher model interpretability is to use integrative approaches that analyse and predict target classes in the context of prior biological knowledge. This study aims to propose accurate and interpretable cancer prediction models (e.g. for tumour progression, tumour-drug sensitivity, and survivability) by integrating multiple heterogeneous data sources with associative data mining and ensemble learning methods.
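As an illustration of how an ensemble can remain interpretable, the following sketch builds a majority-vote ensemble of simple per-feature threshold rules whose individual votes are directly inspectable. The feature names, thresholds, and sample values are hypothetical and are not taken from the study; they stand in for the heterogeneous data sources the abstract describes.

```python
# Minimal sketch of an interpretable majority-vote ensemble.
# All features, thresholds, and data below are illustrative assumptions.

def threshold_classifier(feature_index, threshold):
    """Return a rule that predicts 1 when the chosen feature exceeds the threshold."""
    def classify(sample):
        return 1 if sample[feature_index] > threshold else 0
    return classify

def majority_vote(classifiers, sample):
    """Predict by majority vote and expose each base rule's vote for inspection."""
    votes = [clf(sample) for clf in classifiers]
    prediction = 1 if sum(votes) > len(votes) / 2 else 0
    return prediction, votes

# Hypothetical feature vector: [gene_expression, mutation_count, drug_response]
ensemble = [
    threshold_classifier(0, 0.5),  # rule on gene expression
    threshold_classifier(1, 3),    # rule on mutation count
    threshold_classifier(2, 0.2),  # rule on drug response
]
sample = [0.8, 5, 0.1]
pred, votes = majority_vote(ensemble, sample)
print(pred, votes)  # -> 1 [1, 1, 0]
```

Because each base rule maps to a single feature from a known data source, the vote list itself explains which inputs drove the final call, which is the kind of transparency a knowledge-integrated ensemble aims to preserve.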