Machine Learning (ML) models are increasingly deployed in safety-critical systems, such as self-driving cars and cancer-detection software, to support human decision-making. Safety is therefore central to the success of the human-in-the-loop systems that rely on these models. This project will focus on verifying properties of ML models, providing formal safety guarantees, using the effective solution methodologies offered by Imandra, a powerful Artificial Intelligence (AI) automated reasoning tool for the analysis of many kinds of systems, including autonomous, financial, and model-based systems. The project is expected to deliver safe ML models that help humans make better data-driven decisions.