OS Distro

Helping users reduce the number of false positives detected in their security analysis

Project Overview

Team: UX Designer, Front End Developer

Role: UX Designer

Project: This project began with a user request. “You are showing us things that we have already patched,” they said. “If you could account for that in your analysis, it would reduce false positives for us by a lot.” It was a lightbulb moment.

Duration: 2 weeks

Tools: Figma, Slack, UserZoomGo

The Problem

At Finite State, a Series-B app-sec startup, user research identified reducing false positives in our analysis as one of the major areas for improvement in our application.

Users do it themselves: when our analysis produces a flood of false positives, the user has to go through their findings and separate the true false positives (for example, vulnerabilities they have already patched) from findings that only look like false positives in their context.

Solution and Impact

We designed and built a feature that let users set a patch level for major operating systems (referred to here as OS Distros), such as Linux distributions. If a Linux component in our analysis had a patch applied to it, we could detect that and mark the corresponding findings as false positives for the user, so they wouldn't have to.
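As a rough illustration of that mechanism, here is a minimal sketch of the matching logic in TypeScript. Every name in it (Finding, OsPatchLevel, markPatchedFindings) is a hypothetical stand-in, not Finite State's actual implementation:

```typescript
// Minimal sketch of the auto-detection idea; all types and names here
// (Finding, OsPatchLevel, markPatchedFindings) are hypothetical.

interface Finding {
  id: string;
  component: string;            // e.g. a Linux package found in the binary
  fixedInPatch: string | null;  // the patch that resolves this finding, if known
  status: "open" | "false_positive";
}

interface OsPatchLevel {
  distro: string;               // e.g. "Ubuntu"
  majorVersion: string;         // e.g. "20.04"
  appliedPatches: Set<string>;  // patches detected as already applied
}

// If the detected patch level already includes the fix for a finding,
// that finding is a false positive in this customer's context.
function markPatchedFindings(
  findings: Finding[],
  patchLevel: OsPatchLevel
): Finding[] {
  return findings.map((f): Finding =>
    f.fixedInPatch !== null && patchLevel.appliedPatches.has(f.fixedInPatch)
      ? { ...f, status: "false_positive" }
      : f
  );
}
```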

30%

The decrease in total false positives for a scan.

10%

The increase in total finding resolution for a scan.

Scaled Approach

We started small, with a minimal frontend implementation that provided context for the new information we were pulling in for our users, and planned to scale the approach so users could take more control over the patch levels themselves in the future.

Auto-Detection

The first iteration of the feature auto-detected and auto-applied the OS patch data. To give users insight into how we had improved their analysis, I designed a widget that surfaced the following information (a possible data shape is sketched after this list):

  • Name of the patch

  • Type of patch

  • Name of the operating system

  • Major version

  • Names of patched findings
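As a concrete sketch of the data behind the widget, the fields above could be modeled roughly as follows; the interface and field names are my illustration, not the production schema:

```typescript
// Hypothetical shape of the widget's data, derived from the list above;
// the names are illustrative, not the real API.
interface AppliedPatchSummary {
  patchName: string;             // name of the patch
  patchType: string;             // type of patch (e.g. a security update)
  osName: string;                // name of the operating system
  osMajorVersion: string;        // major version
  patchedFindingNames: string[]; // names of the findings the patch resolved
}
```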

Mapping the user journey around patch detection made it clear that a widget with this data was a necessary addition:

Auto-Detection User Journey

  1. User uploads a binary and analysis takes place.

  2. User navigates to their findings to see what needs to be resolved, mitigated, reported on, etc. 

    Problem: After navigating to a finding, how would a user know that we had resolved their false positives (even if they saw an increase in findings marked false positive)? It would be valuable to call this information out to the user.

Improved User Journey

  1. User initiates an upload and analysis takes place.

  2. User navigates to their findings.

  3. User can toggle a drawer to see finding detail, including the number and type of patches that were applied, and then decide what needs to be resolved, mitigated, reported on, etc.

Next Steps: More User Control

The next iteration of the feature would allow users to apply the patch level themselves or, if they chose, remove the patch level that had been auto-applied for them.

This set the stage for the prototype phase: I defined the key values a user could set in the UI to “apply” those patch levels.
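As a sketch of what those key values might look like, here is a hypothetical setting type for the user-controlled iteration; the shape and names are assumptions, not the real feature's schema:

```typescript
// Hypothetical values a user could set (or clear) in the UI; these names
// are illustrative assumptions, not Finite State's actual schema.
type PatchLevelSetting =
  | { action: "apply"; distro: string; majorVersion: string; patchLevel: string }
  | { action: "remove"; distro: string; majorVersion: string };

// Example: a user manually pins an Ubuntu patch level for their product.
const example: PatchLevelSetting = {
  action: "apply",
  distro: "Ubuntu",
  majorVersion: "20.04",
  patchLevel: "20.04.6",
};
```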

With the feature designed, a group of three engineers and I worked together to build it. Before launching the feature for all of our customers, we tested it with the client who requested it. They provided feedback that we would fold into our solution before final launch. One piece of feedback was: “This is very, very useful.” It was safe to say we’d delivered exactly what they were looking for.
