FRONTEO’s proprietary AI-assisted review service

A turnkey solution to help you reduce and predict the cost of review

For high-scoring batches, which contain a high proportion of responsive documents, we have found review speed to be slow, averaging 20 documents/hr. In contrast, low-scoring batches contain a greater proportion of non-responsive documents and are reviewed faster, at around 60 documents/hr and sometimes higher. Figure 3 illustrates the change in review speed produced by scored batching within AI Review using KAM, compared to linear review.

Figure 3: Review Speed: AI Review vs. Linear Review


1.3  Visualized QC (Heat Map)

The QC Heat Map compares the AI-modeled prediction for a document against the human reviewer's result (a "before and after") and presents the comparison visually by color, enabling the Review Manager to identify where the human reviewers and the computer disagree about the classification or coding of a document. With the assistance of this technology, QC can be managed efficiently and effectively.

Each box represents a set of documents within a range of KIBIT relevancy scores. The box color is determined by the percentage of documents the reviewer has coded relevant. If most or all of the documents are coded non-relevant, the box is green; if most or all are coded relevant, the box is red. When a reviewer has coded likely non-relevant documents as non-relevant and likely relevant documents as relevant, that reviewer's coding on the heat map will appear as a clean gradient from white to green to red.
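The coloring rule above can be sketched in a few lines. This is an illustrative approximation only: the function name, score buckets, and the 20%/80% cutoffs are assumptions for demonstration, not FRONTEO's actual implementation, which may use a continuous color gradient.

```python
def box_color(percent_relevant: float) -> str:
    """Map the fraction of documents a reviewer coded relevant (0.0-1.0)
    within one KIBIT score bucket to a heat-map box color.

    Thresholds are hypothetical: mostly non-relevant -> green,
    mostly relevant -> red, mixed coding -> white.
    """
    if percent_relevant <= 0.2:
        return "green"   # most or all documents coded non-relevant
    elif percent_relevant >= 0.8:
        return "red"     # most or all documents coded relevant
    else:
        return "white"   # mixed coding within this score bucket


# Example: a reviewer whose coding follows the expected gradient produces
# green boxes at low scores and red boxes at high scores.
per_bucket_relevant_rates = [0.05, 0.10, 0.50, 0.85, 0.95]
colors = [box_color(r) for r in per_bucket_relevant_rates]
```

Applied to a reviewer whose coding tracks the model's scores, the low-score buckets come out green and the high-score buckets red, matching the clean gradient described above.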

Figure 4: Heat Map Gradient Bar

When a reviewer's coding does not match the gradient, the QC Heat Map reveals the mismatch and creates a visual signal to check those documents for coding mistakes. See the heat map below as an example.

Figure 5: QC Heat Map


In this QC Heat Map, Reviewer 6 has coded several documents with high relevancy scores as not relevant. In contrast, Reviewer 15 has coded several documents with low relevancy scores as relevant. The QC Heat Map also shows at a glance that the coding of these two reviewers differs from that of all other reviewers, which increases the likelihood that the discrepancy stems from a coding mistake, such as a misinterpretation of documents or a misunderstanding of the review instructions.
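The cross-reviewer comparison described above can also be expressed programmatically: compare each reviewer's per-bucket relevant rate against the mean across all reviewers and flag large deviations. This is a minimal sketch under assumed names and an assumed deviation threshold, not FRONTEO's actual method.

```python
import statistics


def flag_outlier_reviewers(rates: dict, threshold: float = 0.3) -> set:
    """Flag reviewers whose coding deviates from the group.

    rates maps each reviewer to a list of relevant-coding rates, one per
    KIBIT score bucket (low score to high score). A reviewer is flagged
    if, in any bucket, their rate differs from the cross-reviewer mean
    by more than the (hypothetical) threshold.
    """
    n_buckets = len(next(iter(rates.values())))
    means = [
        statistics.mean(r[i] for r in rates.values())
        for i in range(n_buckets)
    ]
    flagged = set()
    for reviewer, r in rates.items():
        if any(abs(r[i] - means[i]) > threshold for i in range(n_buckets)):
            flagged.add(reviewer)
    return flagged


# Example: two reviewers follow the expected gradient; "Reviewer 6"
# codes high-score documents as not relevant (an inverted pattern).
rates = {
    "Reviewer 1": [0.1, 0.5, 0.9],
    "Reviewer 2": [0.1, 0.5, 0.9],
    "Reviewer 6": [0.9, 0.5, 0.1],
}
outliers = flag_outlier_reviewers(rates)
```

Here the inverted reviewer stands out in both the lowest and highest score buckets, mirroring how the discrepancy appears visually on the heat map.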

The QC Heat Map provides an overall view of the quality of reviewer coding. Using it, you can identify potential coding problems that the traditional keyword or tag searches used in most review QC checks would not capture.