Computer-Generated Evidence – Time for a New Approach

It is a well-worn theme that the law tends to lag behind technical and, in particular, computer-based developments. There is now an extreme case of this in the approach that the English courts take to the assessment of computer-generated evidence in both civil and criminal proceedings.

Since 1999, computer-generated evidence in English cases has been subject to a common law presumption that the computer producing the evidential record was working properly at the material time and that the record is therefore admissible as real evidence. The presumption is rebuttable if evidence to the contrary is adduced, in which case it falls to the party seeking to produce the computer record to satisfy the court that the computer was working properly at the material time.

Before 1999, under section 69 of the Police and Criminal Evidence Act 1984, the position was reversed: it was necessary to prove that a computer was operating properly and was not used improperly before any statement in a document produced by it could be admitted in evidence. This approach proved cumbersome and impractical, and section 69 was repealed in 1999 following a Law Commission report which concluded that it was unnecessary. A similar presumption already applied to technical devices such as speedometers and traffic lights, and the Law Commission saw no reason why it should not also apply to computers. The Law Commission appeared to place considerable weight on a comment made by Professor Colin Tapper in 1991 that “most computer error is either immediately detectable or results from error in the data entered into the machine”.

Professor Tapper’s comment may have been correct in the early 1990s, but recent experience in the Post Office/Horizon cases suggests that such a definitive statement on the accuracy of computer-generated evidence was no longer supportable for more complex IT systems, at least by the late 1990s. It is now clear that the presumption needs to be reassessed in the light of the complexity of modern computer systems and the programming errors that are inherent in all software.

In fact, for at least the last 20 years, IT lawyers drafting software contracts (as opposed to those practising in criminal law and civil litigation) have generally recognised that software is inherently prone to errors. Warranty provisions in software contracts almost universally state that the software will only “be free from any material programming errors” and that “no warranty is provided that the operation of the software will be uninterrupted or error free, or that all software errors will be corrected”. Software contracts therefore recognise that software will contain non-material errors and that, in some instances, errors that are identified will not be rectified.

It would also be very rare for the acceptance criteria included in a software contract to require the software to be error free in order to be accepted. Instead, acceptance criteria generally require the software to be free from critical errors (typically those that would cause the software to cease to operate), while permitting up to a specified number of major and minor errors before the software can be rejected. Major errors are frequently defined as those that cause the software to function incorrectly; minor errors are those that have no impact on the functionality or operation of the software. The general intention is that any major or minor errors that exist in the software at the time of acceptance will then be rectified by the software provider as part of ongoing support and maintenance services.

As a result, software contracts, in contrast to the evidential rules relating to computer-generated evidence, have recognised for many years that software is not error free. The evidential rules are therefore out of line with the reality of software development and implementation: they do not take proper account of the fact that software is inherently “buggy”. As Bates v Post Office Ltd (No 6: Horizon Issues) [2019] EWHC 3408 (QB) shows, this can lead the courts to accept computer-based evidence without assessing to any significant extent whether the software contains errors that could affect the evidentiary quality of the computer-generated evidence being presented.

These issues are likely to become even more relevant as AI solutions become more prevalent in computer systems. Machine learning solutions are dependent on the accuracy of the data on which they are trained. If the training data includes errors, it is highly likely that the AI will produce outputs that replicate or take account of those errors (a short illustrative sketch of this point follows the list below). To a certain extent, this issue was already identified in Professor Tapper’s 1991 statement, which recognised that errors would arise from incorrect data “entered into the machine”, although the phrase was clearly not intended to refer to machine learning AI solutions. Moreover, even if the data on which an AI solution is trained is error free (if that is possible), the AI can still produce incorrect outputs. This can happen for various reasons, including:

  • Training Data Biases - if the AI model is trained on biased or incomplete data, it may generate outputs that reflect those biases;
  • Ambiguity in Language - AI models, especially natural language processing models, may struggle with ambiguity or context-dependent interpretations, leading to inaccuracies; and
  • Lack of Specific Information - if the AI model has not been trained on specific information, or if the data it was trained on is outdated, it may not provide accurate or up-to-date answers.
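The following is a minimal sketch, not taken from any actual case, of the point made above that errors in training data propagate into a model’s outputs. It uses the scikit-learn library and invented figures purely for illustration: a trivial model is trained on records containing a single data-entry error and then reproduces that error in its output.

```python
# Minimal, hypothetical sketch: a model trained on data containing a
# data-entry error reproduces that error in its output.
# Uses scikit-learn; the figures are invented for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical transaction amounts and whether each exceeds a £50 threshold.
amounts = np.array([[10], [20], [30], [40], [60], [70], [80], [90]])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # the correct "ground truth"

# A single data-entry error: the £60 transaction is wrongly recorded as
# not exceeding the threshold.
labels_with_error = labels.copy()
labels_with_error[4] = 0

model = DecisionTreeClassifier().fit(amounts, labels_with_error)

# The trained model faithfully reproduces the error it was given:
# it reports the £60 transaction as below the threshold.
print(model.predict([[60]]))  # -> [0] rather than the correct [1]
```

The particular model is beside the point; what matters is that the output can be no more accurate than the data on which the model was trained.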

More generally, the outputs of machine learning AI solutions result from the probabilistic interplay of a number of algorithms within the solution. Those outputs are essentially probabilistic rather than deterministic (as is the case with more conventional IT systems), and so they may vary even where the inputs do not.
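A minimal sketch of that distinction is set out below, assuming an invented scoring model and a softmax-style sampling step (not drawn from any specific AI product): the conventional, deterministic approach always returns the same answer for the same input, whereas the probabilistic approach can return different answers on different runs even though the input never changes.

```python
# Minimal, hypothetical sketch: deterministic vs probabilistic outputs
# for one and the same input. Invented scores, for illustration only.
import numpy as np

def deterministic_output(scores):
    # Conventional approach: always return the highest-scoring answer.
    return int(np.argmax(scores))

def probabilistic_output(scores, temperature=1.0):
    # Generative-AI-style approach: sample an answer in proportion to its
    # probability (as many language models do at non-zero "temperature").
    scores = np.array(scores) / temperature
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

scores = [2.0, 1.5, 0.5]  # the same fixed input every time

print([deterministic_output(scores) for _ in range(5)])  # always [0, 0, 0, 0, 0]
print([probabilistic_output(scores) for _ in range(5)])  # may vary, e.g. [0, 1, 0, 0, 2]
```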

Given the inherent “bugginess” of software, together with the probabilistic nature of many AI solutions, the current common law presumption of the accuracy of computer-generated evidence has clearly run its course and needs revision. This does not mean that the presumption should be reversed: it would be very wasteful of expensive and scarce court resources if the accuracy of all computer-generated evidence had to be demonstrated in every case. Instead, the courts need to take a more nuanced approach. They should recognise that, while in most instances computer-generated evidence is sufficiently reliable to be admitted, errors do arise in such evidence. The courts should be more alert to material that raises the possibility of errors in computer-generated evidence, should recognise that errors exist in the outputs of most computer systems, and should be ready to admit evidence to the contrary where there are reasonable grounds to suspect flaws in the computer-generated evidence being submitted.
