Forensic Data Science


Research Grant Application (under development)

– Objectives

  • To develop relevant datasets, and to develop and validate quantitative measurement and statistical modelling methods, that will enable the adoption of the new paradigm for evaluation of evidence in fingerprint examination: a high-volume branch of forensic science in which the new paradigm has so far made little progress.

– Background

  • Current practice in fingerprint examination is based on visual comparison of the questioned-source fingermark and the known-source fingerprint. Conclusions are subjective judgements most commonly reported as identification/individualization, exclusion, or inconclusive.

  • Known-source prints may come from an already-selected person of interest, but are often obtained via a database search using an Automated Fingerprint Identification System (AFIS). The top-ranked prints in the database, i.e., those that the AFIS algorithms determine to be the most similar to the questioned-source mark, are returned as a candidate list. The practitioner then visually compares the mark with the prints in the candidate list.

  • There have been attempts to calculate likelihood ratios using scores output by AFIS, but these scores are neither designed for nor suitable for this purpose: They are similarity-only scores that do not take account of typicality with respect to the relevant population. They cannot, therefore, be used to calculate a likelihood ratio that provides a meaningful answer to the question formed by the hypotheses adopted in a case. In addition, the internal workings of AFIS are commercially confidential, which has raised concerns about transparency.
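
    To make the distinction concrete, here is a toy numerical sketch (all distribution parameters are hypothetical, chosen only for illustration): a likelihood ratio divides the probability density of the observed score under the same-source hypothesis by its density under the different-source hypothesis. It is this denominator, a model of scores in the relevant population, that captures typicality, and it is exactly what a similarity-only AFIS score lacks.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(score, same_mu, same_sigma, diff_mu, diff_sigma):
    """LR = p(score | same source) / p(score | different source).

    The different-source distribution models typicality in the
    relevant population; without it no LR can be computed.
    """
    return (gaussian_pdf(score, same_mu, same_sigma)
            / gaussian_pdf(score, diff_mu, diff_sigma))

# Hypothetical score distributions (illustrative values only)
lr = likelihood_ratio(0.8, same_mu=0.9, same_sigma=0.1,
                      diff_mu=0.3, diff_sigma=0.15)
```

    The same observed score would yield a very different LR if the relevant-population distribution were different, which is why the denominator cannot simply be omitted.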

  • Forensic Science Regulator (2020) Fingerprint Research and Development Considerations identified a need for “studies to determine the evidential significance of characteristics in relation to fingerprints”, including “How might an objective measure of the strength associated with [fingerprint] features ... be provided?” (§4.2.1).

  • Statistical models designed to calculate evidential likelihood ratios have been developed and described in academic publications, but none have been adopted for regular casework. Thompson et al. (2017) Forensic Science Assessments: A Quality and Gap Analysis – Latent Fingerprint Examination ascribe the lack of adoption of these models to a lack of appropriate data for training and validating the models, i.e., data consisting of prints plus multiple marks collected under multiple casework-reflective conditions from each of a large number of individuals. They opine that:

    • It may eventually be possible to convert latent print analysis from a discipline that relies primarily on subjective analysis of human examiners to one based on more objective quantitative measurements and statistical modeling, but that is a long-term project. (p. 68)

  • The objective of the proposed research project is to conduct the research necessary to make this conversion feasible.

– Methodology

  • The proposed research will include collection of relevant data, and development and validation of statistical models for calculating likelihood ratios.

  • Outline of data-collection plans to appear here.

  • We will use deep neural networks (DNNs) to extract features (DNN embeddings) from fingerprint and fingermark images. From the DNN embeddings, we will calculate likelihood ratios using backend models that are standard in state-of-the-art forensic voice comparison.
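
    As a minimal sketch of the first step in such a pipeline (the embedding vectors below are hypothetical, and real backend models such as those used in forensic voice comparison are considerably more elaborate), a comparison score can be computed between the DNN embedding of a fingermark and that of a fingerprint, and subsequently mapped to a likelihood ratio by a backend model:

```python
import math

def cosine_score(a, b):
    """Similarity score between two embedding vectors (range -1 to 1)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings (real DNN embeddings are
# typically hundreds of dimensions)
mark_embedding = [0.2, 0.7, 0.1]
print_embedding = [0.25, 0.65, 0.05]
score = cosine_score(mark_embedding, print_embedding)
```

    The score itself is not a likelihood ratio; the backend model's role is to convert such scores into well-calibrated likelihood ratios with reference to the relevant population.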

  • Models will be calibrated and validated using protocols based on those that are now standard in forensic voice comparison.
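
    One validation metric that is standard in forensic voice comparison is the log-likelihood-ratio cost (Cllr), which penalizes likelihood ratios that point in the wrong direction. A minimal sketch, using hypothetical validation LRs:

```python
import math

def cllr(same_source_lrs, diff_source_lrs):
    """Log-likelihood-ratio cost: lower is better, 0 is perfect.

    Same-source LRs are penalized for being small; different-source
    LRs are penalized for being large.
    """
    penalty_same = sum(math.log2(1 + 1 / lr) for lr in same_source_lrs)
    penalty_diff = sum(math.log2(1 + lr) for lr in diff_source_lrs)
    return 0.5 * (penalty_same / len(same_source_lrs)
                  + penalty_diff / len(diff_source_lrs))

# Hypothetical validation results (illustrative values only)
value = cllr(same_source_lrs=[100, 50, 8],
             diff_source_lrs=[0.01, 0.2, 0.05])
```

    A system whose validated Cllr is well below 1 is providing useful, well-calibrated information; a system that always outputs LR = 1 scores exactly 1.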

  • Software tools for use by practitioners will be developed, and will be field tested by partner organizations.

Project Team

Laboratory members:

– Geoffrey Stewart Morrison


– William Morris

  • Senior Lecturer in Forensic Science, Nottingham Trent University

This webpage is maintained by Geoffrey Stewart Morrison and was last updated 2023-03-16