While deep learning has shown remarkable performance in medical imaging benchmarks, translating these results to real-world clinical deployment remains challenging. Models trained on data from one hospital or population often fail when applied elsewhere due to distributional shifts. Since acquiring new labeled data is often costly or infeasible due to rare diseases, limited expert availability, and privacy constraints, robust solutions are essential.
Research projects in Information Technology
PhD/RA opportunities on Multimodal LLM
We have several PhD and Research Assistant (RA) opportunities available in areas such as Multimodal Large Language Models (MLLM) for human understanding, MLLM safety, and Generative AI.
If you have published in top-tier conferences (e.g., CVPR, ICCV, ECCV, NeurIPS, etc.), you will have a strong chance of receiving a full PhD scholarship.
Generative-AI driven Requirements Regulation in Space Missions
See all the details here: https://careers.pageuppeople.com/513/cw/en/job/687090/phd-opportunity-on-generativeai-driven-requirements-regulation-in-space-missions
Designing Responsible AI for Scalable Educational Assessment and Feedback
As education systems increasingly adopt AI to support teaching and learning, the automation of assessment and feedback processes has emerged as a critical area of innovation. Large-scale learning environments—such as MOOCs, online degrees, and data-intensive Learning Management Systems—necessitate scalable solutions to provide timely, high-quality feedback. However, existing AI-powered assessment systems often raise ethical, pedagogical, and fairness concerns, including issues of bias, explainability, and learner agency.
Designing Secure and Privacy-Enhancing Frameworks for Digital Education Credentials
This project will investigate the security and privacy challenges emerging from the adoption of digital education credentials, such as W3C Verifiable Credentials. As universities and employers increasingly rely on digital systems to issue, store, and verify qualifications, new risks arise—ranging from data breaches and identity fraud to profiling and surveillance through credential verification logs.
Testing AI/LLM systems
In this project, we will develop automated approaches to detecting defects in AI systems, including LLMs, autonomous driving systems, and more.
Automated software testing and debugging with/without LLMs
The objective of this project is to design automated approaches to detecting bugs in various kinds of software, e.g., compilers, data libraries, and so on.
The project may involve LLMs.
Using AI and machine learning to improve polygenic risk prediction of disease
We are interested in understanding genetic variation among individuals and how it relates to disease. To do this, we study genomic markers or variants called single nucleotide polymorphisms, or SNPs for short. A SNP is a single base position in DNA that varies among human individuals. The Human Genome Project found that these single-letter changes occur all over the human genome; each person carries about 5 million of them! While most SNPs have no effect, some can influence traits or increase the risk of certain diseases.
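As a rough illustration of how such markers can feed into risk prediction, a simple polygenic risk score sums an individual's risk-allele counts weighted by per-SNP effect sizes. The sketch below is a minimal Python example; the SNP identifiers and effect sizes are invented for illustration and are not drawn from any real study.

```python
# Illustrative sketch only: a polygenic risk score (PRS) as a weighted sum of
# risk-allele counts. SNP names and effect sizes are hypothetical; real
# analyses use effect estimates from GWAS summary statistics.

# Per-SNP effect sizes (e.g., log odds ratios) from a hypothetical study.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# One individual's genotypes: number of risk alleles (0, 1, or 2) at each SNP.
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

def polygenic_risk_score(effects, geno):
    """Sum of (effect size * risk-allele count) over SNPs present in both maps."""
    return sum(effects[snp] * geno[snp] for snp in effects if snp in geno)

print(polygenic_risk_score(effect_sizes, genotypes))  # 0.12*2 + (-0.05)*1 + 0.30*0 = 0.19
```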
Minimum Message Length
Minimum Message Length (MML) is an elegant information-theoretic framework for statistical inference and model selection developed by Chris Wallace and colleagues. The fundamental insight of MML is that both parameter estimation and model selection can be interpreted as problems of data compression. The principle is simple: if we can compress data, we have learned something about its underlying structure.
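As a minimal sketch of that idea, the standard two-part formulation writes the total message length as the cost of stating a hypothesis plus the cost of stating the data given that hypothesis, and selects the hypothesis that minimizes the total:

```latex
% Two-part message length: the cost of encoding a hypothesis H plus the cost
% of encoding the data D given H. MML picks the hypothesis with the shortest
% total message.
\begin{align}
\operatorname{MsgLen}(H, D) &= \operatorname{MsgLen}(H) + \operatorname{MsgLen}(D \mid H) \\
                            &= -\log_2 \Pr(H) - \log_2 \Pr(D \mid H), \\
\hat{H}_{\mathrm{MML}} &= \arg\min_{H} \operatorname{MsgLen}(H, D).
\end{align}
```

A richer hypothesis compresses the data more (shorter second part) but costs more to state (longer first part), so minimizing the total trades model complexity against fit.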
Agentic AI for Software Teams: Building the Next Horizon of SWE Agents for Society with Atlassian
🎯 Research Vision
The next generation of software engineering tools will move beyond autocomplete and static code generation toward autonomous, agentic systems — AI developers capable of planning, reasoning, and improving software iteratively. This project explores the development of agentic AI systems that act as intelligent collaborators: understanding project goals, decomposing problems, writing and testing code, and learning from feedback.
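As a rough sketch of what such a loop might look like, the Python snippet below shows a plan / write / test / revise cycle. The helper functions are hypothetical placeholders for LLM calls and tooling introduced only for illustration; they are not part of any existing Atlassian or project API.

```python
# Minimal, hypothetical sketch of an agentic plan -> code -> test -> revise loop.
# plan_tasks, write_patch, and run_tests are stand-ins for LLM and tooling calls.

def plan_tasks(goal: str) -> list[str]:
    """Decompose a project goal into smaller, ordered engineering tasks."""
    return [f"implement: {goal}"]  # stand-in for an LLM planning call

def write_patch(task: str) -> str:
    """Generate a candidate code change for a task."""
    return f"# patch for {task}"   # stand-in for an LLM code-generation call

def run_tests(patch: str) -> tuple[bool, str]:
    """Apply the patch in a sandbox and run the test suite."""
    return True, "all tests passed"  # stand-in for real test execution

def agent_loop(goal: str, max_iterations: int = 3) -> list[str]:
    accepted = []
    for task in plan_tasks(goal):
        for _ in range(max_iterations):
            patch = write_patch(task)
            ok, feedback = run_tests(patch)
            if ok:
                accepted.append(patch)
                break
            task = f"{task} (revise: {feedback})"  # feed test feedback back into the next attempt
    return accepted

print(agent_loop("add input validation to the login form"))
```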
🔍 Research Objectives