Query Builder

Using filters to build a segment of resellers in Alaska or Hawaii activated after June 1, 2019.

 

Background

This feature was born out of the need for our users (large companies white labeling our product) to be able to segment their customers (small businesses) on our marketplace.

We had received multiple requests from major clients expressing their need to segment customers. Each one had a very different use case in mind, which made this challenging.

Here are some examples of segmentation we knew our customers were interested in:

 
 

Research

Due to the importance of flexibility in designing this feature, we knew we wanted it to be dynamic. Customer groups should update to accurately reflect customers as they were added to and removed from a marketplace.

This led me to look at products that had user segmenting or reporting features. Of the products I researched, the ones that allowed their users to create dynamic groupings using query statements provided by far the most flexibility.
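The dynamic-grouping idea can be sketched as a stored set of filter clauses that gets re-evaluated against the live customer list every time the segment is viewed, so membership updates automatically. This is only an illustrative model; the field names, operators, and AND-only logic below are assumptions, not the actual implementation.

```python
# Illustrative sketch of a dynamic segment: a stored query whose
# membership is recomputed on demand from the current customer list.
from dataclasses import dataclass
from operator import eq, gt, lt

# Hypothetical operator names; a real builder would support many more.
OPERATORS = {"equals": eq, "greater_than": gt, "less_than": lt}

@dataclass
class Clause:
    field: str     # e.g. "state" or "company_size" (made-up fields)
    op: str        # key into OPERATORS
    value: object  # value to compare against

def matches(customer: dict, clauses: list[Clause]) -> bool:
    """A customer is in the segment only if every clause holds (AND logic)."""
    return all(OPERATORS[c.op](customer.get(c.field), c.value) for c in clauses)

def segment(customers: list[dict], clauses: list[Clause]) -> list[dict]:
    # Evaluated at read time, so the result always reflects live data.
    return [c for c in customers if matches(c, clauses)]
```

Because nothing is materialized, adding or removing a customer from the marketplace changes the segment the next time it is evaluated, which is the "dynamic" property the feature needed.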

 
 

Each query builder we researched had distinct pros and cons and each one was customized for the specific use case of the product. There was no silver bullet solution.

A few noteworthy observations:

  • Heap does a nice job of organizing their dropdowns to be easily scannable.

  • Segment’s UI is extremely clean and uncluttered. They do a great job of making the query statement feel like a real sentence.

  • Salesforce’s Report Center has by far the most powerful logic engine of any query builder I came across. It allows you to connect clauses with formulas (similar to Excel).

 
 

Customer Engagement

I got the opportunity to get customer feedback very early in the design process by presenting at our annual customer conference called Engage. This allowed me to get Segments in front of about 50 people and learn about their use cases for the feature.

 
Presenting Segments at Engage
 

Engineering Challenges

Once we had a vision for Segments that had been affirmed and refined with customer feedback, we dug into the technical details of the feature. There were a number of limitations on the backend that required the designs to pivot. This feedback-iteration cycle was made easier by the positive relationships formed on our team.

The primary roadblock our team faced was pulling the customer data the filters draw on. Every time you form a query statement (e.g., Company size > 1,000), data has to be pulled from a database. Each filter required digging through an intricate and uncharted web of databases: some data was still tied up in our not-so-beloved monolith, while other data was scattered across various microservices. This data debacle slowed down the development process and led to our greatest challenge.
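To make that scatter concrete, here is a hypothetical sketch of the problem shape: each filterable field is owned by a different system, so resolving a query means a separate lookup per clause. The systems, fields, and values below are invented for illustration and are not the actual architecture.

```python
# Hypothetical sketch: every filter field lives behind a different
# data source, so each clause in a query triggers its own fetch.

def fetch_from_monolith(field: str, account_id: str) -> object:
    # Stand-in for a slow call into the legacy monolith.
    return {"company_size": 1500}.get(field)

def fetch_from_service(field: str, account_id: str) -> object:
    # Stand-in for a call to one of several microservices.
    return {"state": "AK", "activated": "2019-07-15"}.get(field)

# Each filterable field maps to whichever system owns that data.
FIELD_SOURCES = {
    "company_size": fetch_from_monolith,
    "state": fetch_from_service,
    "activated": fetch_from_service,
}

def resolve_field(field: str, account_id: str) -> object:
    """Route a field lookup to the system that owns it."""
    return FIELD_SOURCES[field](field, account_id)
```

Multiply one lookup per clause by tens of thousands of customers and the slow preview times described below follow naturally.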

 
I designed a hoodie for the marketplace team with a “Destroy the Monolith” emblem across the back. People wear it every day! That’s how much we didn’t like the monolith.

I have very supportive teammates.

 

Due to the database complications, displaying query results took five-plus minutes, which was simply not an acceptable user experience. A user could build a query statement but not view the users who matched, or even know whether they had made a mistake in forming the query. That made it impossible for our users to feel confident they would get the results they expected. After many weeks of brainstorming possible solutions, we were forced to surrender and accept a query builder without a results preview.

Although this was a disappointment, the query builder itself was still a powerful tool that would function as promised and it was still something our users wanted. We forged on hoping that one day our databases would be optimized enough to display results in a reasonable amount of time.

As a compromise, I designed a simple email validation tool that runs the query on one user at a time. It gives our users confidence without having to check the query against tens of thousands of users.
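A minimal sketch of that validation idea, assuming a simple clause format: given a single customer record (looked up by email), report how each clause evaluates so the user can see exactly where a query misses. The function name, clause shape, and fields are illustrative, not the tool's real API.

```python
# Illustrative single-user validation: cheap to run because it touches
# one record instead of the whole marketplace.
from operator import eq, gt

OPERATORS = {"equals": eq, "greater_than": gt}

def validate_one(customer: dict, clauses: list[tuple]) -> dict:
    """Return a pass/fail result per clause for one customer record."""
    results = {
        f"{field} {op} {value}": OPERATORS[op](customer.get(field), value)
        for field, op, value in clauses
    }
    # The customer is in the segment only if every clause passed.
    results["in_segment"] = all(results.values())
    return results
```

Showing the per-clause breakdown, not just a yes/no answer, is what lets a user spot a mistyped clause without waiting on a full results preview.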

 
 
This email validation tool was a design compromise that allowed us to complete the rest of the feature in time.

Expanding our Design System

I started designing the query builder with existing components in our design system, but quickly realized they weren’t going to cut it visually or functionally. I decided to add new components to our design system that would help give our platform UI a cleaner, more modern look.

 
 

According to the atomic design methodology, the query builder is an organism made up of reusable molecules (such as a button). Both the organism and its molecules have to be flexible to accommodate future design applications.

Each new component was reviewed by the UX team, reviewed by the frontend developers, and documented. There needed to be a rational case for how a component solved a problem that existing components did not.

I found that it can be quite challenging to convince developers to do more work in the short term for the long-term sake of the design system. I am very grateful to the team for putting in the effort to build out the four components that were approved.

 
Visual documentation of one of the four new components I added to our design system. A searchable dropdown with categories that group the selectable items.

 

User Testing

I performed two formal rounds of user testing internally and many informal feedback sessions with my co-workers. After the qualitative testing, I always sent my users a simple 10-question survey to generate a System Usability Scale (SUS) score. This allowed me to measure and track the usability of the feature as I iterated on it and ensure that I met our team’s agreed-upon baseline SUS score of 65. The two testing rounds averaged 73 and 69, respectively.
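For reference, a SUS score is computed from the ten Likert responses (1 to 5) with the standard formula: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto a 0 to 100 range. A minimal sketch:

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 Likert answers."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly ten responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # indexes 0, 2, 4... are the odd-numbered items
        for i, r in enumerate(responses)
    )
    return total * 2.5  # scale the 0-40 raw sum onto 0-100
```

On this scale, an all-neutral survey (all 3s) scores 50, which is why baselines like the team's 65 sit above the midpoint.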

Round 1: Qualitative Usability Testing with a Static Prototype

Testing a hi-fi prototype

I tested my high-fidelity mock ups with a group of co-worker volunteers. I found trends in their feedback and came to conclusions about what they found confusing. Because I could not properly test the interactive parts of the feature with this prototype, I focused the session on copy, UI, and overall comprehension of the feature.

Most of my testers were intently focused on the copy and had a lot of feedback around specific language and naming. I made a lot of modifications to the copy as a result; most significantly, I shifted away from using the word “query” and replaced it with “filters.”

 

Round 2: Card Sorting & Qualitative Usability Testing with a Coded MVP

Card sorting to improve feature naming

I performed a card sorting exercise with four participants to help solidify some of the names we used in the feature.

I provided each participant with two descriptions and a selection of seven name cards. I asked them to match the names with one of the descriptions and then remove all but one name. All four participants left the same two cards! I was thrilled to have been proven wrong.

As a result, I changed names from "Dynamic Segment” to “Build with filters” and from “Target Specific Companies” to “Build manually.”

This was followed by another usability test, this time using an interactive prototype in a testing environment. I found that this round of users cared much less about copy; they went with their gut and clicked around rather than stopping to observe the page first. I was very pleased that the logic of building a query came intuitively to people. One user said, “This reminds me of writing formulas in Excel, but easier.”