At ByteCubed, I designed a proposal evaluation workflow in collaboration with one other UX designer and the Product, Development, & Data Science teams to standardize source selection across the Department of Defense.
User Research, Interface & Motion Graphics Design, High-Fidelity Prototyping
The United States Department of Defense (DOD) is the country's largest government agency and its largest employer. The DOD is divided into over 15 command offices, each comprising a separate stakeholder with unique needs.
We were asked to examine the federal acquisition program through which the different DOD components publish RFPs, evaluate proposals, and award contracts.
Almost every component has a unique process for topic development & source selection, which makes measuring the program's success difficult and tedious.
To improve the procurement process, we are developing a unified web platform where all federal agencies can release topics, evaluate proposals, & generate reports in one place, customizing the workflow to address individual needs.
From early meetings with program managers across the DOD, it was clear that there were three basic needs: (1) to write and publish topics, (2) to receive & review proposals, (3) to collect & make available accurate, real-time data.
To capture the entire process, we split the project into distinct, use-based work-streams – Topic Development, Source Selection, & Reporting – each with its own Product & Development teams. As a part of the UX team, I split my time across the three modules.
User Journey Mapping
Focusing on the proposal evaluation piece, my first goal was to figure out who exactly is involved with source selection & what they are doing.
Meeting with DOD employees across the source selection process, I facilitated User Journey Mapping exercises to define the user flow and tease out specific goals & pain points.
At the beginning of these sessions, we identified the distinct steps in the user flow (the green index cards in the picture below). Then, we discussed the actions, questions, successes, pain points, and opportunities (the yellow post-it notes) that are associated with each step.
The biggest takeaway from the user journey mapping sessions was that although source selection goals are the same across the DOD, the precise personas involved vary greatly.
We had initially defined user personas by position title, such as Component Program Manager & Proposal Evaluator. However, the more we met with source selection staff across the DOD, the more we realized that while these titles may be the same from one component to the next, the exact duties may differ depending on component size & structure.
To address this, I split our user personas into 9 functional roles defined by the most basic task a user has to perform at any step of the process, allowing us to accommodate differences in user personas from component to component.
Some of the pain points users expressed:
I devised a two-paned view to allow proposal reviewers to read through proposals and evaluate them simultaneously.
I split the evaluation workflow into 5 sections – Proposal Content, Technical Merit, Key Personnel, Commercialization, & Selection Recommendation – to avoid overloading the user & help guide them through the evaluation.
After consulting with the data science & development teams, we realized that the PDF data was not reliably parsable and agreed that an in-app PDF viewer would be a huge technical lift out of scope for the MVP.
In my next iteration, I pivoted to designing a tool that could be re-sized and viewed next to any PDF viewer application.
Throughout the research and design process, I continually pushed for more and more frequent user testing.
Getting even low-fidelity designs in front of users enabled us to validate incremental changes in direction & identify potential problems early on.
However, although we held regularly scheduled client engagement meetings, we weren't always able to carve out enough time for primary users to test every iteration.
Even though we couldn't always find "real" users to participate in user testing, I wanted to continue testing as much as possible.
Recruiting test users from ByteCubed coworkers on other projects, I was able to test small chunks of the design for functionality & discoverability.
Drawing from Material Design & Angular Material components, I built out my sketches in Adobe XD and animated interactions with Principle.
By focusing the primary action in a narrow section of the viewport, I prioritized and conserved vertical working space.
I swapped horizontal steppers for vertical, bringing the section headings in-line with the evaluation form and allowing the user to breeze through the evaluation workflow within one page.
I included the information we could accurately parse in an expanding card, which occupies the viewport only when the user needs it.
After testing the vertical steppers, users noted that they liked "how all the pieces are on one screen" and that "it's easy to switch between sections."
Speaking with primary users, I learned that proposal evaluators have access to a variety of different computer set-ups, from laptops to multi-monitor desktops.
I designed the interface to be responsive, easily adapting to a range of screen sizes while still usable in full screen.
Throughout testing, users noted that "the labels are not clear on all icons," indicating that "adding descriptions to buttons would make things much easier."
To design straightforward & discoverable UI for users of all engagement levels, I added tooltips for unlabeled buttons and modal windows to allow for more descriptive instructions.
83% of the source selection staff I met with were between 46 and 60 years old.
While older generations are becoming more comfortable navigating digital spaces, the platform would need to cater to digital natives & non-natives alike.
I created a motion graphic to help guide users of all computer literacy levels to take full advantage of the proposal evaluation feature.
We are currently on track to release the platform MVP late August 2018. With the first release, I plan to conduct more formalized user testing sessions to continue improving on features we have already built out.
Looking ahead to the next release, we still have a number of features planned that will enable different components to customize the proposal evaluation workflow, including customized evaluation questions and criteria weighting.