
Behind the Scenes on a B2B Product Discovery Effort

An example of a real-world discovery project. Though it happened over a decade ago, the intent, process and outputs are an example of what we should be doing more of today.

Saeed Khan
18 min read · Aug 6, 2024


NOTE: I’ve made some edits to the original article based on feedback and questions I received after it was originally published. I’ve added some more details and descriptions to help explain the work.

Also, I want to clarify something which might be implied in this article. I use the word “requirements” a lot, often in quotes when referencing the output of the discovery effort. And while we did define a number of requirements, the goal of the effort was not to simply create a set of requirements to be implemented — i.e. a “big bang” release.

We wanted to understand the breadth of what might be needed by customers and feed that into whatever ongoing development plans were in play. This was all done in the context of a public enterprise software company with many different business units and dozens of products being marketed and sold worldwide, both directly and via partners. i.e. it was a complex organization with many competitors and challenges. In short, things were messy and would always be messy.

Contents

  • Background
  • Timeline
  • Session Structure
  • Sample Questions
  • Final Writeup
  • Example Findings
  • Example Requirements
  • What Happened After the Report was Published
  • Lessons Learned
  • A Little Feedback Please

Recently, I came across some documents on an old backup drive, and they had some interesting data that I thought I’d share.

Before I get into them, I want to say that I regularly get asked for “real-world” product management examples (vs. hypothetical or theoretical ones). i.e. what really happens in practice vs. what we say in theory.

I’m going to try to share more of that going forward, so consider this as one example.

The documents I found were from some discovery work that I led back in 2010. The main document, the final “requirements” document, is dated June 2010. Yes, that was 14 years ago. I was working at an enterprise software company, leading product management efforts on one of the company’s core products.

And before you think “14 years ago? What is he smoking? The world has changed so much in 14 years, how could this even be relevant?”, let me say that yes, the world has changed, but fundamentally our jobs have not.

Getting deep market and customer insights is even MORE important today than it was then, BECAUSE building software today is SOOO much easier than it was then.

i.e. your software can be easily and quickly copied. The market/customer insights behind it, that feed into your product, your user experience, your go-to-market, and your strategy are where the real and compelling value lies.

The Background

So let me get to the “requirements” document and the discovery effort.

As I mentioned, I was working in an enterprise software company. We wanted to build operational workflow capabilities into a number of our products and wanted some level of consistency across them, even if different products (and their target users/personas) would have different needs.

i.e. cross-product understanding and consistency were the goal, not identical functionality. We decided it was more efficient to start with a single “more general” discovery effort on workflows, and then specialize down later as each product team got closer to implementing whatever it was they needed.

NOTE: We weren’t starting this work with no understanding of workflows. All product teams had some level of workflow capabilities, some rather rudimentary, and others much more established. And each team had some market input, customer requests and perhaps some basic discovery work completed with respect to workflows. But there wasn’t anything comprehensive or recent, and thus this initiative took place to get a more current and consistent understanding of customer workflow needs.

Timeline

The timeline for the discovery effort was laid out in the document. It looked like this:

Feb 2010

  • Initial discussions across multiple product teams and business units to align on process and goals.
  • Some discussions about potential technology directions as different products were based on different tech stacks.

March 2010

  • Project planning workshop bringing all teams together.
  • Identified key areas of research. These were Workflow Design & Development, Scheduling, Execution, Alerting and Monitoring.
  • Identified priorities of each team, key questions and internal ranking (hypothesis based on internal understanding) of functionality that would be needed.
  • Identified key stakeholders across business units as well as the team that would conduct the discovery work.
  • Team consisted of me (leading the effort), UX lead, 2 other Product Managers (Data Quality and B2B), and 3 engineering leads. i.e. we had PM, UX, Eng — a product trio — all working together back in 2010!!
  • Identified potential customers we wanted to speak with and key contacts at those companies.
  • It took several weeks for the outreach and response to arrange customer meetings. Customers had other higher priority work they needed to do. (so that was no different than it is today) 🤷‍♂️
  • We wanted to speak with between 10–20 accounts. We got 12 to confirm.
  • All meetings were done remotely over Webex (Yeah Webex. Webex was all that back then. This is before Cisco let it languish into whatever remains of it today.)

April-May 2010

  • Created interview guides and exercises that would be used in the interviews.
  • Agreed on protocol for interviews. i.e. who will lead, how to interact etc. to maximize benefit to us and to customers.
  • In parallel, members of engineering conducted technical evaluations of a number of 3rd party products, tools, etc. to identify candidates that met known requirements.
  • As we identified requirements from our customer conversations, we kept the engineering teams informed and they fed that into the work they were doing.
  • Conducted our customer sessions. Spoke to some customers more than once. They were very engaged in the process. A few of them provided additional information, including internal process documentation to help us better understand their world.

June 2010

  • Analyzed the findings as a team, including key insights and major requirements.
  • Wrote up the findings and created a set of overall prioritized requirements based on the interviews.
  • Given that the research would be used by multiple products, each team/PM was responsible for defining the priorities for their products and identifying any additional needs their users might have.
  • Presented the findings back to internal stakeholders across the business units.
  • Each PM/team took the data, report and findings and went to work. Some of them did additional discovery work with additional customers to dig further into some of the findings that were particularly relevant to them.

Session Structure

Each customer call had a similar structure. We provided this information to customers in advance so they could prepare as needed and also include anyone they felt would make a strong contribution to the interview session.

  1. Introductions and Objectives
  2. Customer profile — People, Roles, Responsibilities
  3. Skill sets of team — Technical & Business
  4. Workflow Overview/Background
  5. Design & Development
  6. Workflow Examples
  7. Scheduling & Execution
  8. Alerting & Monitoring
  9. Prioritization Exercise
  10. Final discussion & Wrap up

Calls were between 2 and 2.5 hours in length. In some cases we broke the calls up over a couple of days as it was difficult to get people for that much time in one block. In one case, a customer spent about 3 hours with us. She was a real champion and was always happy to help us in discovery efforts. The length of these calls was one of the reasons it took weeks to get them scheduled.

Lesson for those doing this work. Always assume it will take longer to schedule and hold these interview sessions than you originally planned. Three to four weeks sounds like a lot of time to allot for interviews, until you take into account customer schedules, vacations, unexpected issues etc.

Sample Questions

The following are examples of the kinds of questions we looked to ask in our interviews. Note that these questions were a mix of questions from all 3 participants in the team — i.e. UX, Product Management and Engineering. I’ll leave you to guess which ones came from which participants. 😃

Workflow Design and Development

  • Are there standard operating procedures that must be followed in taking business requirements and implementing them in IT? If so, please describe.
  • Who are the parties (teams, roles, responsibilities) involved in the process?
  • Are there larger business processes that are mapped and decomposed into smaller technical “workflows” that are implemented? If so, what are these processes in general?
  • How are these “workflows” implemented? i.e. describe the development/implementation process.
  • Who is responsible for this?
  • What is the technical background of these individuals?
  • How are these “workflows” tested?
  • What tools/products are used in creating these “workflows”?
  • How important is compliance with standards like BPMN, BPEL or workflow standards?

Execution

  • Who is responsible for executing these “workflows” in production environments?
  • What kind of scale/parallelism is required? i.e. simultaneous workflows?
  • Are there dependencies across workflows? How are those defined? Managed?
  • Is flow control — i.e. decisions within workflow logic to decide on subsequent tasks — used in workflows?
  • If so, how frequently?
  • What kinds of decisions are evaluated?
  • Are there more complex or granular decisions that are handled via scripting or external schedulers to route flow?
  • What drives the need for these kinds of flows?
  • What kind of error or exception handling is typically implemented or required?
  • What happens when a workflow fails?
  • Are there different levels of failure?
  • How is recovery handled? Are there different levels of recovery?

As you can see, that’s a lot of questions, just for these two sections. We didn’t ask every interviewee every question, but we did go into as many of these as we could in most calls.

Final Writeup

After conducting all the interviews, we worked to identify specific insights from the calls and agree on the key points and requirements. Using this as input, it was my job to write up the final report. I found it much easier to work solo on the report, sharing drafts with the team and others for feedback as the document came together.

I found at least 3 different drafts on my hard drive, dated over a period of 2–3 weeks. I’m pretty sure this write-up was my primary focus at that time.

The table of contents of the report looked like this:

Executive Summary — p. 4

— Background
— Process
— Core Team
— Terminology

Project Overview — p. 8

— Key Questions
— — Design and Development
— — Scheduling
— — Execution
— — Alerting
— — Monitoring

— Prioritization Exercises

Key Findings — p. 11

— Design & Development
— Scheduling
— Execution
— Alerting
— Monitoring

Prioritization Exercise Results and Findings — p. 22
Prioritized Requirements List — p. 27
Additional Requirements Details — p. 28

— Priority 1 items
— Priority 2 items
— Priority 3 items

Appendix A — Customer Contacts and Profiles — p. 33
Appendix B — Initial Internal Workflow Prioritization — p. 35
Appendix C — Key Data Quality Use Cases — p. 37
Appendix D — Workflow Chaining B2B Use Cases — p. 40
Appendix E — Human Task details from <customer> — p. 42

Yeah…it was a 40+ page “requirements” document. But if you look at the sections, there was a LOT of context and background for the reader to understand what we did and how we got to the results we had.

The majority of the information was split between the Key Findings and Prioritization Exercise Results and Findings sections (15 pages) and the Additional Requirements Details section.

NOTE: I worked on a number of significant discovery projects in my career. i.e. digging deep into some focus area and then writing a report and socializing the results internally. The writeups generally took a form like the above, providing LOTS of context and structure to help those who were not part of the discovery process to understand what we did and why. Being able to write up this kind of document is a critical skill for any PM doing detailed discovery work and it is one that is not readily taught to Product Managers. I was lucky. Back in 2004, I worked on my first significant discovery project. That effort was led by 2 formally trained and experienced UX researchers. I learned a LOT by working with them. Their way of working formed the basis for how I think about discovery and how I teach it to this day.

Example Findings

The following are a couple of the findings from the research. I’m sharing them here just to give examples of how the data was analyzed and shared. The goal was to give readers (esp. people like executives or others who were not involved in the research itself) a clear understanding of what we uncovered, who said what, and why it might be important.

NOTE: These findings are narratives from the qualitative research we did. They cover customer environments and intentions, customer problems and scenarios, but are NOT actual requirements. You can see how they allude to potential areas of additional discovery work IF we decided to invest work in any of these areas.

Scheduling

Virtually all customers use an external enterprise scheduler (Autosys, Control-M, Tivoli etc.) to manage Workflows. Smaller customers like <SmallPharmaCo> use a combination of our built-in scheduler and external schedulers.

There were a few clear patterns with those who used external schedulers.

  • The scheduler is an enterprise standard and is used for all (or a large majority) of jobs executed with our products or other vendor products
  • Our products are part of larger “enterprise” workflows and thus an enterprise scheduler is used to orchestrate the work
  • The limitations of our built-in scheduler eliminate it from use in most environments
  • When using our scheduler, the schedule objects are part of our workflows, so any schedule change in production is also considered a code change and must go through a time-consuming change control / migration process. This is a disincentive to use the built-in scheduler

NOTE: This last point was important, as a design decision years earlier by our company led to an onerous process requirement for our customers. It was a design change we would look to address in the future if we built another scheduling tool.

On-demand (File Watch) Processing

Both <HealthCareCo> and <RegionalBank> process a lot of files which are ftp’d or provided to them by external and internal customers/partners. Both companies’ use of our product can be described as a processing hub: taking inbound files, performing the necessary integration/translation etc. and then delivering a file to a target system.

To enable this, they make heavy use of filewatch or equivalent capabilities. <RegionalBank> does this primarily with shell scripts which kick off Workflows when files arrive. <HealthCareCo> uses their enterprise scheduler to create filewatch jobs which then kick off full workflows to process the files.

In both cases, there is a need for lightweight “filewatch” jobs that sit and wait for files to arrive, before kicking off more intensive data processing workflows. <HealthCareCo> indicated that at least 1/3 of their schedule consists of these filewatch jobs via Autosys, but they’d like us to provide a solution to help them eliminate that if possible.

A simple workflow chain could eliminate this. A filewatch job calling a full workflow and passing the contents (or a reference) of the file to the called workflow would be sufficient for both <HealthCareCo> and <RegionalBank>.
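To make the pattern concrete, here is a minimal sketch (in Python, with entirely hypothetical names like start_workflow) of a lightweight polling filewatch job that hands a reference to each new file off to a heavier processing workflow. It illustrates the idea only; it is not the product’s actual mechanism.

```python
import time
from pathlib import Path

def start_workflow(name: str, payload: dict) -> None:
    """Hypothetical entry point for a full data-processing workflow.
    In a real system this would be a call into the workflow engine."""
    print(f"Starting workflow {name!r} with payload {payload}")

def filewatch(watch_dir: str, pattern: str, poll_seconds: int = 30) -> None:
    """Lightweight job: sit and wait for matching files, then kick off
    the heavier workflow, passing a reference to the file (not its contents)."""
    seen: set[Path] = set()
    watch = Path(watch_dir)
    while True:
        for f in watch.glob(pattern):
            if f not in seen:
                seen.add(f)
                start_workflow(
                    "process_inbound_file",
                    {"filepath": str(f.resolve()), "received_at": time.time()},
                )
        time.sleep(poll_seconds)

# Example: filewatch("/data/inbound", "*.edi")
```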

Example Requirements

The specific “requirements” ran about 7 or 8 pages, and were more description than specification. They were based on feedback from the interviews and our understanding of where we would potentially need to invest product development resources.

NOTE: The list of requirements DIDN’T constitute a feature release or a “big bang” development effort. They were simply the aggregation of our findings represented as statements of need. Prioritization (i.e. P1, P2 etc.) was based on how frequently the item came up in discussion and our own assessment of importance. Additional evaluative discovery work would be needed if we decided to implement any of them.

Here’s a short list of some of the P1 and P2 items. Don’t worry if they don’t make sense. Some of the language is very specific to our products or customer environments.

Example P1 Requirements List

  • Support for multi-session capabilities in workflows
  • Thin-client workflow monitor
  • Support for external schedulers (see details below)
  • Ability to branch processing based on workflow data/metrics
  • Human Task in a Workflow (see details below)
  • Workflow chaining and communication (see details below)

Example P2/P3 Requirements List

  • Access to error code return values of external scripts
  • Alerting on long running workflows
  • Ability for users to subscribe to alerts (see details below)
  • Workflow communication across our products

The following are some of the details we provided for these requirements. This is to give you a better understanding of the level of detail of our findings and how we communicated them to others in the company. These were not at the level where they were ready to implement. They were more like “Epics” or something higher level than that.

Support for external schedulers

Every customer we spoke to, with the exception of <BigBank> and <MidSizeEnterprise>, utilized external schedulers either exclusively or heavily. We must provide a CLI to enable initial customer adoption.

Additionally, the CLI must provide some form of backward compatibility with our existing command line tools so that customers are NOT required to rewrite the hundreds or thousands of scripts they currently have that call those tools to execute workflows.

The level of backward compatibility can be discussed and defined as part of release planning.
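To illustrate what such a CLI might look like, here is a rough sketch with a hypothetical command name (wfrun) and a placeholder engine call. The one real constraint it tries to capture is that external schedulers such as Autosys or Control-M generally key off the process exit code to decide whether a job succeeded.

```python
#!/usr/bin/env python3
"""Sketch of a scheduler-friendly workflow CLI. All names and flags are
illustrative, not the actual product's. The contract that matters to an
external scheduler is the exit code: 0 = success, non-zero = failure."""
import argparse
import sys

def run_workflow(name: str, params: dict) -> bool:
    # Placeholder for the real workflow engine call.
    print(f"Running {name} with {params}")
    return True

def main() -> int:
    parser = argparse.ArgumentParser(prog="wfrun")
    parser.add_argument("workflow", help="name of the workflow to execute")
    parser.add_argument("--param", action="append", default=[],
                        metavar="KEY=VALUE", help="workflow parameter")
    args = parser.parse_args()
    params = dict(p.split("=", 1) for p in args.param)
    return 0 if run_workflow(args.workflow, params) else 1

if __name__ == "__main__":
    sys.exit(main())
```

A scheduler job definition would then simply invoke something like wfrun nightly_load --param RUNDATE=20100601 and treat the exit code as the job status.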

Workflow chaining and communication

Chaining workflows requires the ability for one WF to call another WF and pass data to it. The called workflow can either be invoked on the call, or can be running and receive the data and process it.

The data passed to the workflow can be actual data that needs to be processed (e.g. a file) along with additional metadata or simply a reference to the file that needs processing (full filepath and filename) as well as the additional metadata.

See Appendix D — Workflow Chaining B2B Use Cases for more information and examples of this requirement.
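As a rough, hypothetical sketch of the two handoff styles described above (pass the data itself, or pass a file reference plus metadata), assuming a chained call is just an invocation with a structured payload:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowMessage:
    """Payload passed from one workflow to the next in a chain. Either
    `data` (the content itself) or `file_ref` (a full path/filename)
    is set, plus whatever metadata the caller attaches."""
    data: bytes | None = None
    file_ref: str | None = None
    metadata: dict = field(default_factory=dict)

def downstream_workflow(msg: WorkflowMessage) -> None:
    if msg.data is not None:
        content = msg.data                      # data passed by value
    else:
        with open(msg.file_ref, "rb") as f:     # data passed by reference
            content = f.read()
    print(f"Processing {len(content)} bytes, metadata={msg.metadata}")

# A filewatch (or any upstream) workflow chains by invoking the next one:
downstream_workflow(WorkflowMessage(
    data=b"ISA*00*...",                         # by value; or set file_ref
    metadata={"source": "partner-ftp", "format": "EDI"},
))
```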

Human Task in a Workflow

A human task is any unit of work that must be processed by people. A human task in the context of a workflow must have the following components:

  1. A notification/alert to the person (or people) who will execute the task
  2. An (optional) acknowledgement from the alert recipient that the task has started
  3. A notification or confirmation from the alert recipient that the task has been completed
  4. An (optional) means to direct the next step of the workflow

Note that between Step 2 and Step 3, the actual human task is completed.

How the person actually acknowledges and notifies the system of completion of the task, or dictates the next step of the workflow, must be discussed. There are several possibilities, including links in notification emails, custom interfaces or standard interfaces from the workflow tool.

See Appendix C — Key Data Quality Use Cases for more information and examples of this requirement from a Data Quality perspective.

Diagram of a simple “Human Task”. This was from a discussion slide in the discovery interviews.
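Purely as an illustration of those four components, here is a minimal sketch of a human task modeled as a small state machine. It is a hypothetical rendering of the description above, not a design any team actually shipped.

```python
from enum import Enum, auto

class TaskState(Enum):
    NOTIFIED = auto()      # 1. alert sent to the assignee(s)
    ACKNOWLEDGED = auto()  # 2. (optional) assignee confirmed they started
    COMPLETED = auto()     # 3. assignee reported the task done

class HumanTask:
    """A unit of workflow work performed by a person. The actual work
    happens between ACKNOWLEDGED and COMPLETED; the workflow engine only
    sees the state transitions and (optionally) the chosen next step."""

    def __init__(self, assignee: str, description: str):
        self.assignee = assignee
        self.description = description
        self.state = TaskState.NOTIFIED      # component 1: notification
        self.next_step = None

    def acknowledge(self) -> None:           # component 2 (optional)
        self.state = TaskState.ACKNOWLEDGED

    def complete(self, next_step: str | None = None) -> None:
        self.state = TaskState.COMPLETED     # component 3: completion
        self.next_step = next_step           # component 4 (optional): routing

task = HumanTask("data.steward@example.com", "Review rejected records")
task.acknowledge()
task.complete(next_step="reprocess_batch")
```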

Ability for users to subscribe to alerts

Given that a change in notification delivery (in the current model) involves a code change (and thus testing, promotion etc.), a more abstract model where users can subscribe to alerts/notifications as needed would benefit some customers.

<PharmaCustomer> indicated that business users want to be alerted on workflow success (or failure) during certain times of the day for some particular workflows. This kind of non-standard alerting either requires regular updates to the alerting recipients or a means for those users to address their own alerting needs using a general alerting framework.
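To show what such a subscription model could look like, here is a hedged sketch: each user-owned subscription carries its own filter (workflow, statuses, time-of-day window) that is evaluated when an event is published, so changing who gets alerted becomes configuration rather than a workflow code change. All names are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class Subscription:
    """A user-managed alert subscription. Editing these records is
    configuration, so no code change, testing or promotion cycle."""
    user: str
    workflow: str
    statuses: set          # e.g. {"SUCCESS", "FAILURE"}
    window: tuple | None   # (start, end) time of day, or None for always

    def matches(self, workflow: str, status: str, at: datetime) -> bool:
        if workflow != self.workflow or status not in self.statuses:
            return False
        if self.window and not (self.window[0] <= at.time() <= self.window[1]):
            return False
        return True

def publish(subs: list, workflow: str, status: str) -> None:
    """Fan an event out to every matching subscriber."""
    now = datetime.now()
    for s in subs:
        if s.matches(workflow, status, now):
            print(f"Notify {s.user}: {workflow} -> {status}")

# <PharmaCustomer>-style need: success alerts only during business hours.
subs = [Subscription("analyst@pharma.example", "nightly_load",
                     {"SUCCESS"}, (time(8, 0), time(18, 0)))]
publish(subs, "nightly_load", "SUCCESS")
```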

Note that in the above examples, the requirements are general descriptions of the desired functionality. They are sufficient to convey the gist of the need across to other stakeholders, but definitely NOT sufficient for implementation. Additional detail — discovery work, feedback on mockups etc. — was needed to elaborate on these requirements for development.

Also, keep in mind that Engineering was part of this discovery effort so they understood what Product and UX understood, and heard the same stories etc. from the customers we interviewed. Thus there was no “handoff” or knowledge gap between Product Management and Engineering.

What Happened After the Report was Published

This is where things get really interesting. In June 2010, the discovery work for this project was essentially complete. We had completed our customer interviews, analyzed the information, detailed the findings, created a high-level prioritization of requirements and shared all of it with other teams and stakeholders. From my perspective, the effort was a success.

But the work was only just beginning. There were three core teams that participated in this effort. They were called Data Integration (DI), Data Quality (DQ) and B2B. Even though I was leading this discovery effort, I was also a Product Manager in the DI team, which was, at the time, the biggest business unit of the three. i.e. we had the most customers and the most influence.

There were some internal politics at play that had existed prior to this effort, primarily between the DI and DQ business units. I won’t get into that, but I will say that there was a view from the DQ team that the research was too heavily skewed towards the DI use cases and DI customers.

I disagreed with this because the DQ team had been part of the planning and execution of the discovery work right from the start and they were aware of the customers we were speaking with and the broad focus of our work. But regardless, the claim was made.

Having said that, the Human Task (described in the requirements above) and the DQ use cases (described in Appendix C) were of primary interest to the DQ team and they spent a lot of time digging into those specifically on their own. That made absolute sense, especially AFTER the work we had done that had shed some light on it. “Human Task” was listed as a Priority 1 requirement in the document.

In the end, the DQ team did a lot of customer research and design work and built out a fairly sophisticated human task workflow interface for managing data quality “bad records”. i.e. a very specific and meaningful use case for their customers and personas.

To be honest, I didn’t track what the B2B team did in regards to workflows. They were the smallest of the three teams and worked fairly autonomously.

As for Data Integration — the business unit I worked in — things got complicated. First off, the DI-specific workflow efforts were to be led by another Product Manager. I knew him well, but he had not been involved at all in the initial discovery project. He read the final report, I answered any questions he had, and then I went to work on another DI product.

There were a lot of moving parts in the DI business unit, and the path ahead — as is often the case in larger companies — wasn’t straightforward. The discovery work we did fed into some of the plans for the DI business unit, but they were building a new platform and progress on that was slow.

It took a couple of years (yes, years) for that platform work to come to fruition and the workflow capabilities in that new platform were not super sophisticated at the start. At least that’s my recollection — it was a dozen years ago. 😄

The reason I’m telling you all this is to give you some flavour of what really happens in some companies. I could have stopped at: …and we finished the discovery work, wrote the report and shared the results with stakeholders etc. etc.

But that’s not really the end. The end of discovery work is what you actually do with what you learned, and in this case, each team went their own way, and the paths forward were quite distinct.

Even though we started with the goal of common understanding and consistency, we ended up in quite a different place. And that is fine. That is often how things work out. I still look back at that effort fondly. I recall the customer interactions, as well as working closely with UX and Engineering on a discovery project. We formed some close bonds and the alignment between Product, UX and Engineering at the end of the work was incredibly rewarding.

It’s too bad that we couldn’t carry that alignment forward into building what we had researched, but that’s a reality in business and Product Management. We were better off having done the work, regardless of when the findings were implemented.

Lessons Learned

Looking back, if I were to do it all over again, but knowing what I know today, I wouldn’t change much, but I *might* do a few things differently.

  1. I would probably try to understand the political issues more explicitly. At worst, they could be a factor in undermining the effort, e.g. the statement from the DQ team about the focus of the effort. On a more positive note, being aware of that political dynamic up front would have let me be more active in working to avert it.
  2. I would probably hold a pre-mortem, both at the intra-group level (DI, DQ, B2B) and at the discovery team level. I like pre-mortems. They’re not perfect or a panacea, but they are a good exercise to open minds to the potential for failure and ways to mitigate it. I don’t think the political issues would have surfaced in a pre-mortem, but that’s OK. I wouldn’t expect them to.
  3. I would get more clarity from leadership about what was going to happen with the output of the exercise. i.e. the fact that I was NOT the PM who would work on Workflows for the DI business unit was a surprise to me. I had assumed they asked me to lead the discovery effort because they wanted me to lead the implementation work as well. On one level, I actually didn’t mind NOT being part of the platform development effort — it was messy to say the least — but I had assumed I would be, and being caught off guard was not pleasant.

A Little Feedback Please

If you’ve read this far, thank you. I’d like some feedback on the article to make it better. It should take just 1 minute, but will really be valuable to me. Thanks in advance.

===> Click Here <===


Saeed Khan

Product Consultant. Contact me for help in building great products, processes and people. http://www.transformationlabs.io