Forensic Evaluation Ltd
Dr Geoffrey Stewart Morrison BSc MTS MA PhD FCSFS, Director and Forensic Consultant

http://forensic-evaluation.net/

last update
2021-05-10








  • I provide consulting on:

    • forensic speech science

    • forensic inference and statistics


  • I provide case-specific forensic analysis, advice, and/or critique of others’ reports

    • as a testifying witness or a non-testifying advisor

    • at admissibility hearings, at trial, and at appeal


  • I provide training for lawyers and forensic scientists



  • I work in a paradigm which includes:

    • Evaluation of strength of evidence using the likelihood ratio framework.

      • Recognised by leading forensic statisticians as the logically correct framework for evaluation of strength of evidence.

European Network of Forensic Science Institutes (2015) Guideline for evaluative reporting in forensic science

“The reporting of the value of scientific findings shall conform to four requirements: Balance, Logic, Robustness and Transparency.”

“Reporting practice should conform to these logical principles. This framework for evaluative reporting applies to all forensic science disciplines. The likelihood ratio measures the strength of support the findings provide to discriminate between propositions of interest. It is scientifically accepted, providing a logically defensible way to deal with inferential reasoning.”

Forensic Science Regulator for England & Wales (2021) Codes of practice and conduct: Development of evaluative opinions

“The general precepts applying to methods for the development of evaluative opinion are as follows.

  • That they are founded on a sound scientific basis and validated such that any limitations (for example in the extent or quality of data available) are known and transparently reported.

  • That they comply with the following principles in relation to evaluation in forensic science.

    • Evaluation of scientific observations is carried out within a framework of circumstances. The evaluation depends on the content of the framework.

    • Evaluation is only meaningful when two competing, mutually exclusive propositions are considered. More than one pair of propositions may be considered in the same case, depending on the issues which need to be addressed.

    • The role of the expert is to consider the probability of the observations given the propositions that are addressed, and not the probability of the propositions in light of the observations.

  • That they are based upon the four precepts ...: balance, logic, robustness and transparency.



    • Calculation of strength of evidence, on the basis of relevant data, quantitative measurements, and statistical models.

      • Such approaches are transparent, replicable, and resistant to cognitive bias.

Forensic Science Regulator for England & Wales (2020) Annual report for 2018 to 2019, Foreword:

“Whether it is data science, computer science, physics, chemistry, biology or another discipline, forensic science should be firmly rooted in good science. Courts should not have to judge whether this expert or that expert is ‘better’, but rather there should be a clear explanation of the scientific basis and data from which conclusions are drawn, and any relevant limitations. All forensic science must be conducted by competent forensic scientists, according to scientifically valid methods and be transparently reported, making very clear the limits of knowledge and/or methodology.”

President Obama’s Council of Advisors on Science and Technology (PCAST) September 2016 Report on Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods, page 47:

“Objective methods are, in general, preferable to subjective methods. Analyses that depend on human judgment (rather than a quantitative measure of similarity) are obviously more susceptible to human error, bias, and performance variability across examiners. In contrast, objective, quantified methods tend to yield greater accuracy, repeatability and reliability, including reducing variation in results among examiners. Subjective methods can evolve into or be replaced by objective methods.”



    • Empirical testing of the validity and reliability of the system used to assess the strength of evidence in the case.

      • Testing performed under conditions reflecting those of the case.

      • Such testing is the only way to determine how well a method and its implementation work.

President Obama’s Council of Advisors on Science and Technology (PCAST) September 2016 Report on Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods, page 6:

“neither experience, nor judgment, nor good professional practices (such as certification programs and accreditation programs, standardized protocols, proficiency testing, and codes of ethics) can substitute for actual evidence of foundational validity and reliability. The frequency with which a particular pattern or set of features will be observed in different samples, which is an essential element in drawing conclusions, is not a matter of ‘judgment.’ It is an empirical matter for which only empirical evidence is relevant. Similarly, an expert’s expression of confidence based on personal professional experience or expressions of consensus among practitioners about the accuracy of their field is no substitute for error rates estimated from relevant studies. For forensic feature-comparison methods, establishing foundational validity based on empirical evidence is thus a sine qua non. Nothing can substitute for it.”
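The ratio-of-likelihoods calculation described above can be illustrated with a minimal sketch. The measurement value and the two Gaussian models below are purely hypothetical, chosen only to show the form of the computation: the likelihood of the observation given the same-source proposition divided by its likelihood given the different-source proposition.

```python
import math

def gaussian_pdf(x, mean, sd):
    """Probability density of x under a normal distribution."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Hypothetical measurement extracted from the questioned sample
x = 1.2

# Illustrative models: likelihood of the measurement assuming the
# same-source proposition vs the different-source proposition
likelihood_same = gaussian_pdf(x, mean=1.0, sd=0.5)
likelihood_diff = gaussian_pdf(x, mean=0.0, sd=1.0)

lr = likelihood_same / likelihood_diff
print(lr)  # about 3.8: the observation supports the same-source proposition
```

In real casework the models are trained on relevant population data and the measurements are typically multivariate; this toy example only shows the ratio-of-likelihoods structure.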




Casework
  • I have casework experience in:
    • Australia (NSW, QLD, SA, VIC, WA)
    • Canada
    • China
    • Northern Ireland
    • Sweden
    • United States (Federal, CO, MN)


  • I conduct forensic analyses in forensic speech science:

    • forensic voice comparison
      • where the court wants to determine whether the voice of a speaker on an audio recording was produced by a particular known speaker or by some other speaker

    • disputed-utterance analysis
      • where the court wants to determine what a speaker said at some point on an audio recording

    • My experience includes conducting analyses, submitting reports in both criminal and civil proceedings, and testifying in court.


  • I also provide critiques of reports submitted by others.

    • My experience includes submitting written critiques and testifying in court.

    • My experience includes advising the defence in relation to a US Federal Court Daubert hearing on the admissibility of forensic voice comparison testimony tendered by the prosecution.

    • Example of a critique written for a journalistic case

    • Lawyers who are concerned about the scientific validity of a forensic speech science report submitted by another expert should definitely contact me.


  • I provide testimony related to a listener’s abilities to recognise a speaker by the sound of their voice.

    • Sometimes, instead of commissioning a forensic voice comparison report, a party in a court case attempts to rely on a non-expert, such as a police officer, saying that they recognised a speaker’s voice.

    • Research has identified a number of factors that may make listeners better or worse at identifying speakers.

    • A key research finding is that people think that they and others are better at identifying speakers than they really are.

    • My experience includes submitting written reports and testifying in court.


  • I also provide informational reports designed to educate the court about speaker recognition in general.

    • My experience includes submitting a written report in relation to a civil case.




Training

“I have attended several presentations on the likelihood-ratio framework over the last few years. Yours was the first that actually made it understandable.”

“I obtained great value from this workshop which was: Very well arranged. Structure was excellent. Pacing was good. Learning feedback opportunities were numerous.”

“Interactive, small group, whole day workshop, plenty of time for questions, speaker was knowledgeable and funny. Excellent all round.”

“It was nice to see how the likelihood ratio applied to real forensic evidence contexts and working through examples helped me to understand and practice the concepts. It was helpful to work through simple and fun examples to ease into the more complex forensic problems.”

“I liked the structure of the workshop - how we started with basic concepts and applied those to scientific data. The presentation was clear and very useful. I also liked the exercises after the presentation, which helped to put the theory into practice and test our knowledge. Highly enjoyable.”


I provide training for forensic scientists, lawyers, judges, and others.

Training can be specific to forensic speech science, or can cover evaluation of forensic evidence in general.

Training will be tailored depending on the needs of the client.

Training can be provided in English or Spanish. / Los talleres se pueden impartir en inglés o español.

Below are outlines of introductory workshops.

  • Introduction to logical reasoning for the evaluation of forensic evidence

    • Slides:

    • Audience: forensic scientists and/or lawyers


      This half-day workshop provides an introduction to the likelihood-ratio framework for the evaluation and interpretation of forensic evidence.

      There is a great deal of misunderstanding and confusion about the likelihood-ratio framework among lawyers, judges, and forensic scientists.

      The likelihood-ratio framework makes explicit the questions which must logically be addressed by the forensic scientist and considered by lawyers, judges, and triers of fact in assessing the work of the forensic scientist.

      This workshop explains the logic of the likelihood-ratio framework in a way which is accessible to a broad audience and which does not require any prior knowledge of the framework. It uses intuitive examples and audience-participation exercises to gradually build a fuller understanding of the likelihood-ratio framework.

      The workshop also includes discussion of common logical fallacies.

      Other workshops Dr Morrison presents generally assume familiarity with the material presented in this workshop.


  • Calculating the strength of forensic evidence, and testing the validity and reliability of forensic-evaluation systems

    • Audience: forensic scientists or lawyers (different depth of coverage and focus depending on the audience)


      This half-day workshop provides an introduction to topics such as the calculation of forensic likelihood ratios on the basis of relevant data, quantitative measurements, and statistical models, and an introduction to empirically assessing the validity and reliability of forensic-evaluation systems.

      Some of the topics listed below can also be presented as stand-alone tutorials.

      Audience members are assumed to already have a basic understanding of the logic of the likelihood-ratio framework, e.g., by having participated in my workshop “Introduction to logical reasoning for the evaluation of forensic evidence”.

      Topics covered may include:

      • basic statistical models for calculating likelihood ratios

      • calibrating forensic-evaluation systems

      • empirically testing the validity and reliability of forensic-evaluation systems

        Slides:
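As a sketch of the “empirically testing validity” topic above: a widely used validity metric in this field is the log-likelihood-ratio cost, Cllr (Brümmer & du Preez, 2006). The test data below are invented purely for illustration; a real validation would use many test pairs and conditions reflecting those of the case.

```python
import math

def cllr(same_source_lrs, diff_source_lrs):
    """Log-likelihood-ratio cost: lower is better; 0 is perfect,
    and 1 corresponds to an uninformative system that always outputs LR = 1."""
    penalty_same = sum(math.log2(1 + 1 / lr) for lr in same_source_lrs)
    penalty_diff = sum(math.log2(1 + lr) for lr in diff_source_lrs)
    return 0.5 * (penalty_same / len(same_source_lrs)
                  + penalty_diff / len(diff_source_lrs))

# Invented LRs output by a system for test pairs of known origin
same = [8.0, 20.0, 3.0, 0.9]   # pairs truly from the same speaker
diff = [0.1, 0.05, 0.4, 2.0]   # pairs truly from different speakers
print(cllr(same, diff))  # about 0.5 for this toy data
```

Well-calibrated systems are penalised little for LRs that point the right way and heavily for strong LRs that point the wrong way, which is why Cllr is preferred over a simple error rate in this field.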



About Dr Morrison
“Morrison is one of the leading thinkers in the world about problems of forensic inference. Few have his ability to understand and explain forensic statistics.”

Prof William C Thompson
School of Law, and Department of Criminology, Law & Society, University of California Irvine
Co-counsel for OJ Simpson in his criminal trial in Los Angeles, 1994–1995
Originator of the terms “prosecutor’s fallacy” and “defense attorney’s fallacy”


In addition to my consulting work, I am:

  • Associate Professor of Forensic Speech Science, Aston University

    • Director, Forensic Data Science Laboratory, Department of Computer Science

    • Director, Forensic Speech Science Laboratory, Aston Institute for Forensic Linguistics


My previous appointments include:

  • Simons Foundation Visiting Fellow, Probability and Statistics in Forensic Science Programme, Isaac Newton Institute for Mathematical Sciences

  • Scientific Counsel, Office of Legal Affairs, INTERPOL

  • Adjunct Professor, Department of Linguistics, University of Alberta

  • Director, Forensic Voice Comparison Laboratory, School of Electrical Engineering & Telecommunications, University of New South Wales


I am also:

  • Fellow of the Chartered Society of Forensic Sciences

  • Chair of the Forensic Science Committee of the British Standards Institution


I am involved in the development of standards for forensic science via membership in committees, subcommittees, and working groups of:

  • British Standards Institution (BSI)

  • International Organization for Standardization (ISO)


I have authored more than 50 peer-reviewed journal articles, law review articles, book chapters, and conference proceedings papers.


I have provided training and/or research and development services to lawyers and forensic laboratories in North and South America, Europe, Australasia, and Asia.


For more about my work see:



Recommended reading

The following publications provide introductions to forensic speech science and evaluation of forensic evidence. They should be accessible to a broad audience including lawyers.


  • Morrison G.S., Enzinger E., Zhang C. (2018). Forensic speech science. Chapter 99 in Freckelton I., Selby H. (Eds.), Expert Evidence. Sydney, Australia: Thomson Reuters.

    • An introduction to the field of forensic speech science intended to be accessible to a broad audience.


  • Morrison G.S. & Thompson W.C. (2017). Assessing the admissibility of a new generation of forensic voice comparison testimony. Columbia Science and Technology Law Review, 18, 326–434. https://doi.org/10.7916/stlr.v18i2.4022

  • Morrison G.S. (2018). Admissibility of forensic voice comparison testimony in England & Wales. Criminal Law Review, (1), 20–33.

    • This paper focuses on admissibility in England & Wales, and also discusses admissibility in Northern Ireland.


  • Morrison G.S., Enzinger E. (2019). Introduction to forensic voice comparison. Chapter 21 (pp. 559–634) in Katz W.F., Assmann, P.F. (Eds.) The Routledge Handbook of Phonetics. Abingdon, UK: Taylor & Francis.

  • Morrison G.S. (2014). Distinguishing between forensic science and forensic pseudoscience: Testing of validity and reliability, and approaches to forensic voice comparison. Science & Justice, 54, 245–256. http://dx.doi.org/10.1016/j.scijus.2013.07.004

    • This paper reviews calls from the 1960s onward for forensic voice comparison to be empirically validated under casework conditions.

      • e-mail me to request a copy.


  • Morrison G.S., Enzinger E., Hughes V., Jessen M., Meuwly D., Neumann C., Planting S., Thompson W.C., van der Vloed D., Ypma R.J.F., Zhang C., Anonymous A., Anonymous B. (2021). Consensus on validation of forensic voice comparison. Science & Justice, 61, 229–309. https://doi.org/10.1016/j.scijus.2021.02.002

    • In the context of a case, given the results of an empirical validation of a forensic-voice-comparison system, how can one decide whether the system is good enough for its output to be used in court? This paper provides a statement of consensus developed in response to this question. Contributors included individuals who had knowledge and experience of validating forensic-voice-comparison systems in research and/or casework contexts, and individuals who had actually presented validation results to courts. They also included individuals who could bring a legal perspective on these matters, and individuals with knowledge and experience of validation in forensic science more broadly. We provide recommendations on what practitioners should do when conducting evaluations and validations, and what they should present to the court.


  • Koehler J.J. (2018). How trial judges should think about forensic science. Judicature, 102(1), 28–38.

    • A discussion of admissibility and reactions to the 2016 report by President Obama’s Council of Advisors on Science and Technology

  • Swofford H., Champod C. (2021). Implementation of algorithms in pattern & impression evidence: A responsible and practical roadmap. Forensic Science International: Synergy, article 100142. https://doi.org/10.1016/j.fsisyn.2021.100142

    • A discussion of the advantages of the use of relevant data, quantitative measurements, and statistical models over subjective judgements based on experience.


  • Morrison G.S., Neumann C., Geoghegan P.H. (2020). Vacuous standards – subversion of the OSAC standards-development process. Forensic Science International: Synergy, 2, 206–209. https://doi.org/10.1016/j.fsisyn.2020.06.005

  • Morrison G.S., Neumann C., Geoghegan P.H., Edmond G., Grant T., Ostrum R.B., Roberts P., Saks M., Syndercombe Court D., Thompson W.C., Zabell S. (2021). Reply to Response to Vacuous standards – subversion of the OSAC standards-development process. Forensic Science International: Synergy, 3, article 100149. https://doi.org/10.1016/j.fsisyn.2021.100149

    • We caution courts not to accept at face value claims of scientific validity based on the fact that published standards have been followed. We would encourage courts to enquire further so as to ascertain whether those standards are fit for purpose.



Links


Contact
  • An initial consultation up to half an hour is free.


  • Send me an e-mail with your contact information and I will call you via Skype or telephone as soon as I can.

    • e-mail address:

      geoff hyphen morrison at forensic hyphen evaluation dot net