Procurement Analytics
How can we provide more transparency into the government procurement process?
Appian’s government solution suite became an integral part of Appian’s strategy to bring the power of the platform to customers quickly and in an easy-to-customize way. As agencies began adopting our government solutions, the company pivoted toward finding new ideas that could continually improve the lives of government contracting officers.
One of those pivots was ✨Procurement Analytics✨, an opportunity for customers to connect data across the government solution suite to draw insights on important factors like agency spend and process efficiency.
This team was built differently: about ⅓ the size of a typical government solutions team. Its goal was to build beautiful, functional products reflecting a high-level vision that could become the next breakthrough idea. We were meant to gather feedback and deliver our findings to determine whether it was worth investing a full team to maintain a formal solution.
“We are the first solutions Spartans: light weight, fast, and resilient.”
Government organizations spend millions of dollars annually with relatively little insight and reporting on the goods and services that are being procured.
Our primary users for the spend analytics dashboard are contracting managers and employees within the agency's strategic sourcing group.
These users want a high-level overview of their spend data, to track their organization's progress toward spend goals, and to identify and predict patterns of spend that determine workload distribution or actions that need to be taken.
Learning about how the government views and handles money was a huge part of trying to build a dashboard that would be meaningful for the people using it.
What pieces of data would trigger a contracting manager to say, “Hey, this is a problem”? 🤔
When we first started the project, the product manager had a couple of metrics they wanted us to focus on. These were used to iterate on a base design that we would show to stakeholders for validation.
Phase 1: MVP
With this basic set of metrics, the PM and I iterated on different layouts to emphasize one or more of the metrics.
We began showing the designs to stakeholders and customers we had existing relationships with to see whether our idea had value. Each stakeholder had their own difficulty seeing the full picture.
General sentiment (summarized by me):
This isn’t bad, I can see how some people can find value in this*...but for me personally there’s something missing. It would be nice if we could have “XYZ”
*The most difficult part of this process was that feedback was always vague and lukewarm (not great but not bad)
With each meeting, we received a new piece of data to consider adding to the dashboard. From an MVP dashboard, came a Maximalist approach where we tried to include EVERYTHING.
Phase 2: Maximalism
This phase was consumed by iterations on different ways we could cram as much data onto a screen (or multiple screens) as possible.
When showing stakeholders our new approach, we had very similar feedback, but in the opposite direction.
General sentiment (summarized by me):
This isn’t bad, but not everything on this screen will be relevant for everyone.* We might be putting in a lot of effort adding this data considering some agencies might not care for it
*Again, equally as vague, couldn’t catch a break 😮💨
After this feedback, my PM and I decided to take a step back and rethink the way we were approaching the problem. Our initial ask from executive stakeholders was to build a dashboard.
One dashboard.
But maybe one dashboard was not going to cut it....
Based on our feedback, we learned that every agency has different definitions and metrics for measuring whether they are being “efficient” with their spending.
Why not let every agency customize this dashboard to only include data they deem is important? 🤯
Phase 3: Customizability
It didn’t seem like we were making progress defining a set of metrics we believed was important. No matter how much research we did or how many conversations we had, we couldn’t define a single view that made everyone happy. This led to the idea that we should let each agency define what was important to them from a set of metrics we could offer.
This customization ensured that the dashboard would be relevant to users and allowed us a model to scale upwards if we were to offer more metrics to add.
With customization came a new problem: how do we ensure the dashboard tells a cohesive story regardless of the metrics users pick?
We needed to offer flexibility, but also enough constraints to guarantee that the dashboard would look great, offer meaningful insights, and not be time-consuming to set up.
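One way to picture the “flexibility with constraints” model is a small validation step that checks an agency's metric selection against the offered set. This is an illustrative sketch only, not the actual implementation; the metric names and the selection limit are invented:

```python
# Hypothetical sketch of agency-level dashboard customization.
# Metric names and the max-selection constraint are invented for illustration.

AVAILABLE_METRICS = {
    "total_spend", "spend_by_category", "contract_cycle_time",
    "small_business_utilization", "goal_progress",
}
MAX_METRICS = 6  # upper bound so the layout stays readable

def configure_dashboard(selected):
    """Validate an agency's metric selection against the offered set."""
    selected = set(selected)
    unknown = selected - AVAILABLE_METRICS
    if unknown:
        raise ValueError(f"Unsupported metrics: {sorted(unknown)}")
    if not selected:
        raise ValueError("Select at least one metric")
    if len(selected) > MAX_METRICS:
        raise ValueError(f"Choose at most {MAX_METRICS} metrics")
    return sorted(selected)

print(configure_dashboard(["total_spend", "goal_progress"]))
# ['goal_progress', 'total_spend']
```

Constraining choices to a curated set is what keeps a customizable dashboard cohesive: every metric an agency can pick has a known layout and a known story it tells.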
As we learned in our domain research, good government spending relies on meeting goals (rather than simply spending less, as personal budgeting might imply).
Agencies must try to get as close to their annual budget number as possible to ensure their funds do not get cut the following year.
Our dashboard focuses on allowing agencies to set unique goals and track their progress towards them.
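The goal-tracking arithmetic itself is simple: progress is the fraction of an annual target obligated so far. A minimal sketch, with invented numbers (the function name and figures are hypothetical, not from the product):

```python
# Hedged sketch: progress toward an annual spend goal.
# Agencies aim to land close to the target ("use it or lose it"),
# so the interesting signal is distance from 100%, not raw spend.

def spend_progress(obligated, goal):
    """Return the fraction of the annual goal obligated so far."""
    if goal <= 0:
        raise ValueError("goal must be positive")
    return obligated / goal

annual_goal = 1_000_000.00    # hypothetical agency target
obligated_to_date = 820_000.00

pct = spend_progress(obligated_to_date, annual_goal)
print(f"{pct:.0%} of annual goal obligated")  # 82% of annual goal obligated
```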
With this new approach, stakeholders were excited that this model could differentiate us from others in the market. Even the stakeholders who initially believed the dashboard was a vanity project saw the value it could provide.
"Love this solution! I am impressed with the fresh take on procurement analytics"
Sanat Joshi
EVP Product and Solutions
"Procurement Analytics was just an idea at the start, with a lot of uncertainty and lack of clarity. Brian and Kimberly spent time following the practice of discovery & design to gather alignment, come up with options to present to stakeholders, gather feedback & iterate the design multiple times to get to a strong outcome. This was a first foray for us to deliver using Appian's native records and data fabric capabilities."
2 customers signed in the first year of release.
Check out our documentation
Working through ambiguity and a lack of initial interest
This project was difficult to start because no one knew what they wanted from this dashboard, and some stakeholders even questioned the value of pursuing the idea. Dashboards can have a reputation for not being actionable: a snazzy way to show off that we can do cool things. This was also a user group we do not normally target in the government solution suite, so we had to make a lot of assumptions about what this managerial persona might care about.
Feedback sessions were also vague at times; neither clearly good nor clearly bad feedback was given, making it difficult to know where to pivot. Through all of this we persevered, pushed for answers, and continued to iterate until we landed on a great idea.
Seeing a product through from ideation to release
My previous experience as a UX designer had always been working on isolated features of a product, without much insight into what happened to a feature after the design cycle finished.
Sometimes the feature doesn’t get released; sometimes it gets released but we never hear feedback about it again. This time I was involved and kept updated on sales progress, and I joined customer calls to collect their feedback.
Working in a small, fast team
This style of working was new to me and to my team. Traditionally, our company has valued shipping thoroughly tested and developed products. This made us move slower and could have meant wasted effort if the products we released did not gain traction in the market.
This new working style allowed us to test ideas, which meant I had to accept that we were not always going to have the answers. Actually, most of the time we did not have the answers. This team was meant to test hypotheses, not create something perfect. I learned to really enjoy this aspect of my work.