The Insurance AI team designs, develops, and supports products that use Nearmap and third-party data to derive insurance risk insights. The Insurance Data Scientist plays a key role in helping insurance companies unlock the value of Nearmap's AI-driven data within their pricing, underwriting, modelling, and regulatory workflows. This includes contributing actuarial expertise to ensure that risk and damage classification scores, predictive models, and derived insights are statistically robust, relevant to insurance use cases, and suitable for integration into rating plans, filings, and customer-facing solutions. A core aspect of this role is to build, curate, and grow Nearmap's own database of policy and loss data through partnerships with insurers.

A significant portion of the Insurance Data Scientist's role is to liaise directly with actuaries and data scientists at insurance companies to support their testing and integration of Nearmap AI data into their models, workflows, and rate filings through retro tests and other ad-hoc analyses. As such, experience working as a data scientist or actuary in the property & casualty (P&C) insurance space is critical. The Insurance Data Scientist will also play a key role in developing materials and analyses that demonstrate and quantify the value of these products for insurance customers, and will assist with integration and regulatory requirements.

This role is ideal for a data scientist with a strong actuarial background who is excited to work at the intersection of AI, geospatial data, and insurance, and who is passionate about helping insurers improve their pricing accuracy, operational efficiency, and regulatory outcomes using innovative data sources.
Skills & Experience we are looking for:

- Domain Knowledge – property/casualty insurance pricing, rating, and regulatory requirements: Experience and comfort building insurance pricing models using property data following traditional actuarial methods (e.g., GLMs); familiarity with regulatory requirements for property/casualty insurance rating models and fluency with related statistical concepts (e.g., variable selection, overfitting, fairness testing, Gini, lift, AUC, cross-validation, sensitivity analysis).
- Data Science: Strong grasp of data science fundamentals (data analysis, feature engineering, modelling frameworks, model validation, confidence intervals, etc.), and facility with data extraction and manipulation using SQL.
- Programming/Tech Environments: Ability to code in scientific Python using libraries such as NumPy, pandas, scikit-learn, and Matplotlib, and to use git for source control.
- Communication: Excellent communication skills and experience in client-facing roles, with the ability to translate technical findings into actionable insights for insurance customers.
- Scientific Approach: Follows the scientific method of formulating hypotheses and applying statistical tests to validate them.
- Pragmatism: While extensive knowledge of statistical theory is highly valued, pragmatism wins over elaborate theory when it comes to shipping products that work.
- Collaboration: We believe data science is a team sport, and are after candidates who can communicate well, share knowledge, and be open to taking on ideas from anyone in the team. Having worked on shared code bases in a commercial environment is a big plus, but it's the attitude that matters most.
- Technical Skills: A solid base of Python is key to a role in the team. Other than that, we're pretty flexible – we know tools are changing rapidly, and will continue to do so for many years to come.
- Attention to Detail: Showing attention to detail when it counts is important.
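To make the pricing and validation items above concrete, here is a minimal, hypothetical sketch of the kind of work involved: fitting a claim-frequency GLM (Poisson, log link) and scoring it with a Gini index, using the libraries named above. All data, feature names (e.g., an AI-derived `roof_score`), and parameters are simulated and illustrative, not a Nearmap product or method.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import PoissonRegressor

# Simulated policy-level data: exposure, one AI-derived rating feature, claim counts.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "exposure": rng.uniform(0.5, 1.0, n),     # policy-years in force
    "roof_score": rng.uniform(0.0, 1.0, n),   # hypothetical condition score
})
true_rate = 0.1 * np.exp(1.5 * df["roof_score"])  # frequency rises with score
df["claims"] = rng.poisson(true_rate * df["exposure"])

# Frequency GLM: model claims per unit exposure, weighting by exposure.
X = df[["roof_score"]]
y = df["claims"] / df["exposure"]
model = PoissonRegressor(alpha=0.0)
model.fit(X, y, sample_weight=df["exposure"])

def gini(actual, predicted):
    """Gini index from a Lorenz curve of actual claims ordered by prediction."""
    order = np.argsort(predicted)
    cum_share = np.cumsum(actual[order]) / actual.sum()
    return 1.0 - 2.0 * cum_share.mean()  # 0 = no lift over random ordering

pred = model.predict(X) * df["exposure"]
print(f"coef on roof_score: {model.coef_[0]:.2f}")
print(f"Gini: {gini(df['claims'].to_numpy(), pred.to_numpy()):.3f}")
```

In a retro test the same pattern applies, except `pred` comes from the insurer's existing rating plan with and without the new feature, and the Gini/lift comparison quantifies the incremental value.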
Possesses an analytical mind and a strong nose for data issues.

Highly desirable:

- Domain Knowledge – Geospatial Data: Experience working with imagery and/or geospatial data science problems and related technical libraries such as GeoPandas.
- Data / ML Engineering: Familiarity with data and/or ML engineering tools and practices, including pipeline development and scalable model deployment.
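As a small illustration of the geospatial side of the role, the sketch below computes great-circle distances from property coordinates to a single hypothetical hazard point using NumPy alone. The coordinates and the hazard point are made up for illustration; in practice a library like GeoPandas would handle full geometry layers and spatial joins.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# Hypothetical portfolio: distance from each property to one hazard location
# (e.g. a flood gauge), vectorised over all properties at once.
props = np.array([[-33.87, 151.21],   # a Sydney property
                  [-37.81, 144.96]])  # a Melbourne property
hazard = (-33.90, 151.20)
dists = haversine_km(props[:, 0], props[:, 1], hazard[0], hazard[1])
print(np.round(dists, 1))
```

Distance-to-hazard features like this are a common way geospatial data feeds into the rating models described earlier.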