Forensic Evaluation Ltd
Dr Geoffrey Stewart Morrison BSc MTS MA PhD FCSFS, Director and Forensic Consultant


  • I provide consulting on:

    • forensic speech science

    • forensic inference and statistics

  • I provide case-specific forensic analysis, advice, and/or critique of others’ reports

    • as a testifying witness or a non-testifying advisor

    • at admissibility hearings, at trial, and at appeal

  • I provide training for lawyers and forensic scientists

  • I work in a paradigm which includes:

    • Evaluation of strength of evidence using the likelihood ratio framework.

      • Recognised by leading forensic statisticians as the logically correct framework for evaluation of strength of evidence.

European Network of Forensic Science Institutes (2015) Guideline for evaluative reporting in forensic science

“The reporting of the value of scientific findings shall conform to four requirements: Balance, Logic, Robustness and Transparency.”

“Reporting practice should conform to these logical principles. This framework for evaluative reporting applies to all forensic science disciplines. The likelihood ratio measures the strength of support the findings provide to discriminate between propositions of interest. It is scientifically accepted, providing a logically defensible way to deal with inferential reasoning.”
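As a purely illustrative sketch of the framework described in the quote above (all numbers and distributions below are invented for illustration, and a real forensic analysis would use validated statistical models, not a single hand-picked feature), the likelihood ratio is the probability of the observations given one proposition divided by their probability given the competing proposition:

```python
import math

def gaussian_pdf(x, mean, sd):
    """Probability density of a normal distribution at x."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

# Hypothetical measurement of a single acoustic feature taken from the
# questioned-speaker recording (value invented for illustration).
measurement = 120.0

# Probability density of the measurement if the questioned speaker is the
# known speaker (same-speaker model) ...
p_same = gaussian_pdf(measurement, mean=118.0, sd=5.0)

# ... and if the questioned speaker is some other speaker from the
# relevant population (different-speaker model).
p_diff = gaussian_pdf(measurement, mean=140.0, sd=15.0)

# The likelihood ratio: how much more probable the observation is under
# the first proposition than under the second. A value above 1 supports
# the same-speaker proposition; below 1, the different-speaker proposition.
likelihood_ratio = p_same / p_diff
```

Note that the expert reports only this ratio; combining it with other evidence to reach a probability that a proposition is true is the role of the trier of fact, not the forensic scientist.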

Forensic Science Regulator for England & Wales (2021) Codes of practice and conduct: Development of evaluative opinions

“The general precepts applying to methods for the development of evaluative opinion are as follows.

  • That they are founded on a sound scientific basis and validated such that any limitations (for example in the extent or quality of data available) are known and transparently reported.

  • That they comply with the following principles in relation to evaluation in forensic science.

    • Evaluation of scientific observations is carried out within a framework of circumstances. The evaluation depends on the content of the framework.

    • Evaluation is only meaningful when two competing, mutually exclusive propositions are considered. More than one pair of propositions may be considered in the same case, depending on the issues which need to be addressed.

    • The role of the expert is to consider the probability of the observations given the propositions that are addressed, and not the probability of the propositions in light of the observations.

  • That they are based upon the four precepts ...: balance, logic, robustness and transparency.

    • Calculation of strength of evidence, on the basis of relevant data, quantitative measurements, and statistical models.

      • Such approaches are transparent, replicable, and resistant to cognitive bias.

Forensic Science Regulator for England & Wales (2020) Annual report for 2018 to 2019, Foreword:

“Whether it is data science, computer science, physics, chemistry, biology or another discipline, forensic science should be firmly rooted in good science. Courts should not have to judge whether this expert or that expert is ‘better’, but rather there should be a clear explanation of the scientific basis and data from which conclusions are drawn, and any relevant limitations. All forensic science must be conducted by competent forensic scientists, according to scientifically valid methods and be transparently reported, making very clear the limits of knowledge and/or methodology.”

President Obama’s Council of Advisors on Science and Technology (PCAST) September 2016 Report on Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods, page 47:

“Objective methods are, in general, preferable to subjective methods. Analyses that depend on human judgment (rather than a quantitative measure of similarity) are obviously more susceptible to human error, bias, and performance variability across examiners. In contrast, objective, quantified methods tend to yield greater accuracy, repeatability and reliability, including reducing variation in results among examiners. Subjective methods can evolve into or be replaced by objective methods.”

    • Empirical testing of the validity and reliability of the system used to assess the strength of evidence in the case.

      • Testing performed under conditions reflecting those of the case.

      • Such testing is the only way to determine how well a method and its implementation work.

President Obama’s Council of Advisors on Science and Technology (PCAST) September 2016 Report on Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods, page 6:

“neither experience, nor judgment, nor good professional practices (such as certification programs and accreditation programs, standardized protocols, proficiency testing, and codes of ethics) can substitute for actual evidence of foundational validity and reliability. The frequency with which a particular pattern or set of features will be observed in different samples, which is an essential element in drawing conclusions, is not a matter of ‘judgment.’ It is an empirical matter for which only empirical evidence is relevant. Similarly, an expert’s expression of confidence based on personal professional experience or expressions of consensus among practitioners about the accuracy of their field is no substitute for error rates estimated from relevant studies. For forensic feature-comparison methods, establishing foundational validity based on empirical evidence is thus a sine qua non. Nothing can substitute for it.”
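One widely used metric for the kind of empirical validation discussed above is the log-likelihood-ratio cost (Cllr). The sketch below (with invented likelihood-ratio values, not results from any actual system) shows how it is computed from the likelihood ratios a system outputs for test pairs whose true same-speaker or different-speaker status is known:

```python
import math

def cllr(same_speaker_lrs, diff_speaker_lrs):
    """Log-likelihood-ratio cost: a standard validation metric for
    likelihood-ratio systems. Lower is better; a system that always
    outputs LR = 1 (i.e., provides no information) scores exactly 1."""
    # Penalise same-speaker pairs for which the LR was low ...
    penalty_same = sum(math.log2(1.0 + 1.0 / lr) for lr in same_speaker_lrs)
    # ... and different-speaker pairs for which the LR was high.
    penalty_diff = sum(math.log2(1.0 + lr) for lr in diff_speaker_lrs)
    return 0.5 * (penalty_same / len(same_speaker_lrs)
                  + penalty_diff / len(diff_speaker_lrs))

# Invented test results: LRs the system produced for pairs known to be
# same-speaker (ideally high) and different-speaker (ideally low).
same = [50.0, 8.0, 200.0, 3.0]
diff = [0.02, 0.5, 0.001, 0.1]
score = cllr(same, diff)
```

For the testing to be meaningful, the test pairs must reflect the conditions of the case, e.g., the recording quality and speaking styles of the case recordings.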

  • I have casework experience in:
    • Australia (NSW, QLD, SA, VIC, WA)
    • Canada
    • Denmark
    • Northern Ireland
    • Sweden
    • United States (Federal, CO, MN)

  • I conduct forensic analyses in forensic speech science:

    • forensic voice comparison
      • where the court wants to determine whether the voice of a speaker on an audio recording was produced by a particular known speaker or by some other speaker

    • disputed-utterance analysis
      • where the court wants to determine what a speaker said at some point on an audio recording

    • My experience includes conducting analyses, submitting reports in both criminal and civil proceedings, and testifying in court.

  • I also provide critiques of reports submitted by others.

    • My experience includes submitting written critiques and testifying in court.

    • My experience includes advising the defence in relation to a US Federal Court Daubert hearing on the admissibility of forensic voice comparison testimony tendered by the prosecution.

    • Example of a critique written for a journalistic case

    • Lawyers who are concerned about the scientific validity of a forensic speech science report submitted by another expert should definitely contact me.

  • I provide testimony related to a listener’s abilities to recognise a speaker by the sound of their voice.

    • Sometimes, instead of commissioning a forensic voice comparison report, a party in a court case attempts to rely on a non-expert, such as a police officer, saying that they recognised a speaker’s voice.

    • Research has identified a number of factors that may make listeners better or worse at identifying speakers.

    • A key research finding is that people think that they and others are better at identifying speakers than they really are.

    • My experience includes submitting written reports and testifying in court.

  • I also provide informational reports designed to educate the court about speaker recognition in general.

    • My experience includes submitting a written report in relation to a civil case.


“I obtained great value from this workshop which was: Very well arranged. Structure was excellent. Pacing was good. Learning feedback opportunities were numerous.”

“Interactive, small group, whole day workshop, plenty of time for questions, speaker was knowledgeable and funny. Excellent all round.”

“It was nice to see how the likelihood ratio applied to real forensic evidence contexts, and working through examples helped me to understand and practice the concepts. It was helpful to work through simple and fun examples to ease into the more complex forensic problems.”

“I liked the structure of the workshop – how we started with basic concepts and applied those to scientific data. The presentation was clear and very useful. I also liked the exercises after the presentation, which helped to put the theory into practice and test our knowledge. Highly enjoyable.”

“The pace of the workshop was appropriate as it gives you time to think and develop the understanding. Content was also good as it included the application of the LR to various forensic type of evidence.”

“I liked the build up from assuming no knowledge to actually thinking about likelihood ratios. The examples were great and varied and easily relatable. It didn’t feel rushed or too slow. There was sufficient time to think and consume before moving on to the next bit. I loved the way the formulae were explained and progressed.”

“I thought that the workshop was overall very broad and informative - providing an overview of likelihood ratios while going through a few simple concrete examples. A strength of the workshop was that it was very interactive. It was nice to see how the likelihood ratio applied to forensic evidence contexts and working through examples helped me to understand and practice the concepts. I also thought that it was helpful to work through simple and fun examples to ease into the more complex forensic type problems. Great workshop overall!”

Manager of a Forensic Laboratory: “I liked the simple and effective way the workshop was presented and discussed. It could have been a lot more statistical and formulaic but Geoff kept that to a bare minimum.”

Court Appointed Forensic Advisor: “I don’t have a background in statistics, but you presented the material in a way that was really easy for me to understand. I can’t thank you enough.”

Head of the Forensic Science Division of a large Public Defender Office: “I have attended several presentations on the likelihood-ratio framework over the last few years. Yours was the first that actually made it understandable.”

Deputy Director of a National Forensic Laboratory: “The workshop was very well prepared and conducted. Although I have only a basic knowledge of likelihood ratios, you explained everything very clearly. You are most definitely one of the few people who have a great talent to be a very-very good teacher.”

I provide training for forensic scientists, lawyers, judges, and others.

Training can be provided in English or Spanish.

General training is currently offered through the Forensic Data Science Laboratory, Aston University.

More specific training may be offered through Forensic Evaluation Ltd.

About Dr Morrison
“Morrison is one of the leading thinkers in the world about problems of forensic inference. Few have his ability to understand and explain forensic statistics.”

William C Thompson
Professor Emeritus, School of Law, and Department of Criminology, Law & Society, University of California Irvine

“Your paradigm article [2022 Advancing a paradigm shift in evaluation of forensic evidence: The rise of forensic data science] is an excellent piece. It should be a guiding light for quite some time (for anyone interested in being guided). You see both the promise and the challenges more clearly and completely than most of us.”

Michael J Saks
Regents Professor, Sandra Day O’Connor College of Law and Department of Psychology, Arizona State University

In addition to my consulting work, I am:

  • Associate Professor of Forensic Speech Science, Aston University

  • Director of the Forensic Data Science Laboratory, Aston University

  • Fellow of the Chartered Society of Forensic Sciences

  • Chair of the Forensic Science Committee of the British Standards Institution

My previous appointments include:

  • Simons Foundation Visiting Fellow, Probability and Statistics in Forensic Science Programme, Isaac Newton Institute for Mathematical Sciences

  • Scientific Counsel, Office of Legal Affairs, INTERPOL

  • Adjunct Professor, Department of Linguistics, University of Alberta

  • Director, Forensic Voice Comparison Laboratory, School of Electrical Engineering & Telecommunications, University of New South Wales

I am involved in the development of standards for forensic science via membership in committees, subcommittees, and working groups of:

  • British Standards Institution (BSI)

  • International Organization for Standardization (ISO)

I have authored more than 60 peer-reviewed journal articles, law review articles, book chapters, and conference proceedings papers.

I have provided training and/or research and development services to lawyers and forensic laboratories in North America, South America, Europe, Australasia, and Asia.

For more about my work see:

Recommended reading

The following publications provide introductions to forensic speech science and evaluation of forensic evidence. They should be accessible to a broad audience including lawyers.

  • Morrison G.S., Enzinger E., Zhang C. (2018). Forensic speech science. Chapter 99 in Freckelton I., Selby H. (Eds.), Expert Evidence. Sydney, Australia: Thomson Reuters.

    • An introduction to the field of forensic speech science intended to be accessible to a broad audience.

  • Morrison G.S. & Thompson W.C. (2017). Assessing the admissibility of a new generation of forensic voice comparison testimony. Columbia Science and Technology Law Review, 18, 326–434.

  • Morrison G.S. (2018). Admissibility of forensic voice comparison testimony in England & Wales. Criminal Law Review, (1), 20–33.

    • This paper focuses on admissibility in England & Wales, and also discusses admissibility in Northern Ireland.

  • Morrison G.S., Enzinger E. (2019). Introduction to forensic voice comparison. Chapter 21 (pp. 559–634) in Katz W.F., Assmann, P.F. (Eds.) The Routledge Handbook of Phonetics. Abingdon, UK: Taylor & Francis.

  • Morrison G.S. (2014). Distinguishing between forensic science and forensic pseudoscience: Testing of validity and reliability, and approaches to forensic voice comparison. Science & Justice, 54, 245–256.

    • This paper reviews calls from the 1960s onward for forensic voice comparison to be empirically validated under casework conditions.

      • e-mail me to request a copy.

  • Morrison G.S., Enzinger E., Hughes V., Jessen M., Meuwly D., Neumann C., Planting S., Thompson W.C., van der Vloed D., Ypma R.J.F., Zhang C., Anonymous A., Anonymous B. (2021). Consensus on validation of forensic voice comparison. Science & Justice, 61, 299–309.

    • In the context of a case, given the results of an empirical validation of a forensic-voice-comparison system, how can one decide whether the system is good enough for its output to be used in court? This paper provides a statement of consensus developed in response to this question. Contributors included individuals who had knowledge and experience of validating forensic-voice-comparison systems in research and/or casework contexts, and individuals who had actually presented validation results to courts. They also included individuals who could bring a legal perspective on these matters, and individuals with knowledge and experience of validation in forensic science more broadly. We provide recommendations on what practitioners should do when conducting evaluations and validations, and what they should present to the court.

  • Koehler J.J. (2018). How trial judges should think about forensic science. Judicature, 102(1), 28–38.

    • A discussion of admissibility and reactions to the 2016 report by President Obama’s Council of Advisors on Science and Technology.

  • Swofford H., Champod C. (2021). Implementation of algorithms in pattern & impression evidence: A responsible and practical roadmap. Forensic Science International: Synergy, article 100142.

    • A discussion of the advantages of the use of relevant data, quantitative measurements, and statistical models over subjective judgements based on experience.

  • Morrison G.S., Neumann C., Geoghegan P.H. (2020). Vacuous standards – subversion of the OSAC standards-development process. Forensic Science International: Synergy, 2, 206–209.

  • Morrison G.S., Neumann C., Geoghegan P.H., Edmond G., Grant T., Ostrum R.B., Roberts P., Saks M., Syndercombe Court D., Thompson W.C., Zabell S. (2021). Reply to Response to Vacuous standards – subversion of the OSAC standards-development process. Forensic Science International: Synergy, 3, article 100149.

    • Courts should not accept at face value claims of scientific validity based on the fact that published standards have been followed. We would encourage courts to enquire further so as to ascertain whether those standards are fit for purpose.

  • Basu N., Bali A.S., Weber P., Rosas-Aguilar C., Edmond G., Martire K.A., Morrison G.S. (2022). Speaker identification in courtroom contexts – Part I: Individual listeners compared to forensic voice comparison based on automatic-speaker-recognition technology. Forensic Science International, 341, 111499.

    • Question: Which is more accurate, speaker identification based on a judge listening, or expert testimony based on state-of-the-art automatic-speaker-recognition technology?

    • Answer: Expert testimony based on state-of-the-art automatic-speaker-recognition technology.


  • An initial consultation up to half an hour is free.

  • Send me an e-mail with your contact information and I will call you via Skype or telephone as soon as I can.

    • e-mail address:

      geoff hyphen morrison at forensic hyphen evaluation dot net

  • I only accept commissions from courts, lawyers, and law-enforcement agencies. If you are a private individual, please retain a lawyer and ask your lawyer to contact me.