Investigators use many types of evidence to help them identify and prosecute a suspect in a criminal case: from basic evidence such as eyewitness identification to highly complex forensic evidence including DNA analysis.

Fingerprints are one of the most common types of “clues” investigators seek, whether at the scene of a crime or on a weapon allegedly used in the commission of a crime.

Although fingerprint evidence has been treated as irrefutable proof in criminal cases for more than a century, there is increasing concern that fingerprints are not as reliable as prosecutors, juries, and even scientists once thought.


How long have police been using fingerprint evidence?

As early as the 14th century, a Persian doctor noted that no two sets of fingerprints are alike, a notion that has allowed prosecutors to use fingerprint evidence as proof of identification. However, it was not until the 19th century that fingerprint evidence was used in criminal investigation. In 1892, an Argentinian police officer identified a woman as the murderer of her two children based on her bloody fingerprint left on a door post.

Since then, fingerprint analysis has been heralded as infallible, based largely upon the theory that fingerprints are a unique identifying feature.

How are fingerprints collected and identified?

Fingerprints, unless left as a clear impression in a substance or upon glass, are often invisible to the naked eye. Collecting and analyzing fingerprints left at a crime scene can be a complex process.

The pattern of a fingerprint is determined by friction ridges in the fingertips. These friction ridges contain sweat pores, and as a person places his or her hand on a smooth surface, the sweat, dirt, and body oils transfer to the surface, leaving a fingerprint impression.

Investigators make these fingerprints visible through the use of special dust and chemicals. The visible prints are then “lifted” and sent to a lab for analysis.

Although it may be difficult to recover a complete print, fingerprint analysts use enhancement techniques to obtain a fuller image of a partial print and then make an identification based upon that enhanced image.

How reliable are fingerprints as evidence?

Police, prosecutors, and juries are often quick to think of fingerprint analysis as infallible. They believe that, since no two fingerprints are alike, the presence of a person’s fingerprints at a crime scene is undeniable evidence of guilt.

However, fingerprint evidence is far from infallible.

Because there is no way to determine the date a fingerprint was transferred to a surface, the mere presence of a fingerprint shows only that a person was in that location or touched that object at some point in time. It does not indicate that the person was there when the crime was committed or used that object to commit a crime.

Furthermore, there is increasing concern that the technologies used to enhance partial prints or smudged prints may be inaccurate.

To find a “match,” or positive fingerprint identification, the analyst must find a specific number of points on the fingerprint found at the scene that match the fingerprint of the suspect. Even experts disagree as to the number of matching points that indicate a true match, with some requiring 20 points and others requiring as few as 12.
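The threshold-based comparison described above can be illustrated with a minimal sketch. This is not how forensic examiners actually work (real analysis compares minutiae such as ridge endings and bifurcations, and involves expert judgment); the function names, the coordinate representation of points, and the tolerance value are all illustrative assumptions. Only the 12-versus-20-point disagreement comes from the article.

```python
# Illustrative sketch ONLY: a simplified point-counting comparison.
# Real fingerprint examination is far more involved; this merely shows
# why the choice of threshold (12 vs. 20 points) changes the outcome.

def count_matching_points(scene_print, suspect_print, tolerance=2.0):
    """Count scene-print points that lie within `tolerance` units
    (both axes) of some point in the suspect's print.
    Points are hypothetical (x, y) coordinate pairs."""
    matches = 0
    for (x1, y1) in scene_print:
        if any(abs(x1 - x2) <= tolerance and abs(y1 - y2) <= tolerance
               for (x2, y2) in suspect_print):
            matches += 1
    return matches

def is_declared_match(scene_print, suspect_print, required_points=12):
    """Declare a match once the point count reaches the threshold.
    Per the article, experts disagree: some require 20 points,
    others as few as 12."""
    return count_matching_points(scene_print, suspect_print) >= required_points
```

Note that the same pair of prints can be a "match" under a 12-point standard and a non-match under a 20-point standard, which is precisely the reliability concern the article raises.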

In 2002, a federal judge ruled that fingerprint evidence did not meet the Supreme Court’s standards for scientific evidence, saying it failed to meet three of the four standards. However, only a few months later, the judge changed his mind, allowing an FBI agent to testify that a fingerprint left at a crime scene was a match.

Fingerprint evidence, like other types of scientific evidence, is often hard to dispute, simply because any identification deemed “scientific” is widely perceived to be reliable. However, as technologies evolve, many once highly regarded investigative techniques—bite mark analysis, polygraph testing—are being shown to be less reliable than once thought.