Archive for Product

How to log Feature Requests

Below is an internal team post I wrote to help us at Close.io do a better job of capturing customers’ feature requests in a way that leads to better product development outcomes.

I’m sharing this publicly because I believe this advice can help other startups and SaaS product companies as well.

When a customer request for a new feature comes in, it’s easy not to log it at all (“we’ve heard that one before”) or to log it in a suboptimal way. Here are a few details on how to best capture customer needs and why it matters.

The typical, but unhelpful, way

The most common way feature requests are logged is in a “light” format like this:

  • “john@example.com wants feature FOO – [link to ticket]”

While this does provide a list of people to notify if we ever do launch a feature that we happen to call “FOO”, it falls short in many other ways.

This is typically very unhelpful for Product Development or for getting a customer’s needs solved because:

  • Many people have different ideas about what this feature specifically could be.
  • There are different ways a feature could work and this doesn’t give any insight into which way would be best.
  • Frequently there’s a different, better way that their underlying need (use case) could be met instead.

Logging requests in this “light” format may seem at least like a good way to build a notification list, but in reality it leads us to build, and notify people about, something that turns out not to solve their core problems well.

Similarly, it may seem that this at least gives us a “votes” system to prioritize Product decisions, but in reality there isn’t enough detailed signal in the votes to design a feature we can be confident really helps most of them.

Caveat: Logging a feature request with “email address only” is still better than not logging it at all, because it at least gives us a list of people we can later contact for more information.

The better way

A feature request logging format that is 100 times more helpful is one that includes details of the use case. A use case should include:

  • Who is the user and company, and what’s the user’s role within the company?
  • What situation is causing the user to want this feature? (When did the need start?)
  • In their own words, what do they mean by “feature FOO”?
  • How, specifically, would they use this feature?
  • Really, why do they want this feature? What is the specific use case?
  • What is the business value they’re hoping to achieve because of this feature?

It all really boils down to asking “why?” and then logging in as much detail as possible, in the customer’s own words, the specifics about what they are actually trying to accomplish.

“In the customer’s own words” is really important because it’s easy for us to, in the moment, substitute our own current idea of a solution, when what’s important is logging their actual problem and goals. Having the customer’s own words helps us avoid bias.

How this helps

Demand-based Product Development

Concise but detailed use cases help direct us toward clarity when we try to validate whether a Product Proposal makes sense to move forward with. They help answer:

“Is there really a pattern of consistent use cases that are important enough AND that our specific idea of solution would be a GREAT solution to?”

When logging feature requests, it’s best to assume that, by default, no feature will get developed until there is a set of documented specific use cases with reasons/explanations on why it’s important to that customer and how they would use it.

From a Product Development perspective, it’s important that we pay most attention to the underlying need (demand) rather than customers’ ideas and requests for specific features (supply). (For more about this, there’s a great podcast on the subject).

Design Decisions

Even when there’s clear demand in a problem area and many requests for a particular feature, detailed feature requests with use cases still help tremendously in another way.

Often, there are multiple very different ways a feature could work, but without understanding exactly what users are trying to achieve, it’s hard to know which way is better.

Most features that seem clear and straightforward on the surface can go in different directions once you really dig into the nitty-gritty UI/UX or technical design. Being able to go back to the core use cases (specific real-world things our customers are trying to do) helps us quickly and confidently choose a direction.

Who should do this

The Product team takes ultimate responsibility for fleshing out customer use cases and validating solutions with customers.

Anyone who talks with customers, however (especially Support, Success, and Sales), is in a unique position to already have many interactions where feature requests and customer problems come up. It is immensely helpful and valuable for moving the product forward in the right ways if these interactions result in feature requests logged with use cases. We love having the entire company championing customer needs and giving input into Product direction. This is the best way to do that.

At Close.io, the best place to log feature requests with use cases is in our “Feature Requests & Customer Needs” Trello Board. If you’re ever inclined to write a Product Proposal, including a few curated use cases (in the customer’s own words) is useful for supporting the idea.


What do you think? Send me a tweet about how your team captures customer feature requests.


Don’t punish old trials and former customers

A common pattern in SaaS apps is to allow a free trial period of 2 weeks or 1 month, and then to require a credit card to use it any longer.

Either out of curiosity or out of a genuine need for a tool that some SaaS service is offering, I will often sign up for a free trial soon after learning about it in order to check it out. For a variety of reasons, by the time the free trial is up, I’m not ready to purchase.

It could be because I was just poking around. But more often it’s because I got too busy. Or my reason for signing up didn’t stay a high enough priority for me to be ready to purchase and fully implement a solution. Or maybe the product just wasn’t developed far enough to satisfy what I was looking for.

What I find happening is that 3, 6, 12, or 18 months later I’ll find myself thinking about this tool. Perhaps the problem that originally led me to check out tools in this category has become more pressing than ever. Or perhaps I’m fed up with another tool I chose, and am searching again for a better option. Or I’m hoping that the product has evolved more. Or whatever.

When logging back into your previously-created account, what you typically see is something like this:

Your free trial expired. Please enter your credit card to continue.

At this point, it’s far too easy to just close the tab. I’ve done it many times, even when I was actually in need of (and willing to purchase) a tool in their category.

Savvy users will email the site’s sales or support team and can usually get a trial extended, but this often takes a few hours, which sucks. When a user gives you enough attention to want to check out your product right now, you should always take advantage of that. It’s too easy for that attention to get lost if you make the user wait.

Similarly, some users will just use another email address, which is really bad for understanding your marketing funnel and metrics, and is a bad user experience overall. Plus, this may mean losing whatever progress was made on the first trial.

Let’s stop punishing our older trials.

I always thought this was a bad user experience, but I knew we were guilty of doing the same thing at Close.io. Not anymore. Now, if you log in to a trial that has been expired long enough (i.e. you haven’t checked out the product in a long time), we give you a single click to get started again.

[Screenshot: the expired-trial login screen, now with a one-click option to start again]
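
The rule behind that button is simple. Here’s a minimal, hypothetical sketch of the eligibility check (the 90-day threshold and function name are assumptions for illustration, not our actual implementation):

```python
# Hypothetical sketch of the "expired long enough" check, not Close.io's actual code.
from datetime import datetime, timedelta, timezone
from typing import Optional

RESTART_AFTER = timedelta(days=90)  # assumed threshold for "expired long enough"

def can_restart_trial(trial_expired_at: datetime, now: Optional[datetime] = None) -> bool:
    """True if the trial expired long enough ago to offer a one-click restart."""
    now = now or datetime.now(timezone.utc)
    return now - trial_expired_at >= RESTART_AFTER

expired_long_ago = datetime.now(timezone.utc) - timedelta(days=200)
print(can_restart_trial(expired_long_ago))  # True -> show "restart your trial", not a credit card wall
```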

The same applies for former customers. Once you’ve been around a while, it won’t be uncommon for an early adopter to have churned and then want to give you another shot a year later. We should welcome this!

It’s a super easy change to make, and within minutes of deploying it we already saw its effect show up in our chat.

[Screenshot: our chat, minutes after the change went live]

Don’t treat old users worse than new trials!


The last 20% before shipping

What makes a new feature or product update “done” versus what makes it “really done”? At Close.io we developed our own process to answer this question, based on years of shipping new features for our sales communication platform. Today, we want to share this checklist with you.

As soon as a new feature you’ve built is running and working on your development server, there’s a strong temptation to think it’s “done”. You want to ship it. After all, the code is working, and you know many of your users would benefit immediately from the change.

It’s important to stop and ask yourself: What else should we do other than just making a new feature functional? How can we improve the experience for users or make this feature more accessible and maintainable?

We’ve learned that when you’ve only gotten a feature working, you are at most “80% done”, and that the last 20% really makes all the difference. In fact, sometimes the “last 20%” of polishing a feature can take just as long to get right as the “first 80%”; it’s also what separates good from great.

Here’s our internal checklist that we use before launching significant new features or changes. Whether for launching a new reporting feature or migrating our database to a different cluster, we’ve found this list to be really helpful.

Performance

  • Is it fast?
  • Are the database queries optimized?
  • Will it scale for a large number of records?

Code quality

  • Is the code cleaned up, organized, and properly abstracted?
  • Is it well commented / documented? Assume someone else will have to maintain it.
  • Are there relevant unit tests?

Edge cases

  • Did you consider and properly handle edge cases and invalid inputs?
  • Did you test the first time experience / no-data-yet case?
  • Is it localized? Consider timezones for anything date-specific, and Unicode for all text (see the sketch after this list).
  • Did you test in all the supported browsers / platforms?
  • Have you tested for potential security vulnerabilities?
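
As a tiny illustration of the timezone item above (a generic Python sketch, not tied to any particular feature): naive datetimes silently assume local time, so store and compare timestamps as timezone-aware UTC values.

```python
from datetime import datetime, timezone

naive = datetime(2015, 8, 4, 14, 50)                       # no tzinfo: ambiguous
aware = datetime(2015, 8, 4, 14, 50, tzinfo=timezone.utc)  # explicit UTC

print(naive.tzinfo)       # None - "works on my machine" until a user is in another timezone
print(aware.isoformat())  # 2015-08-04T14:50:00+00:00
```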

Polish

  • Does the UI look polished? Is it consistent or better than the rest of your application?
  • Does the UI react to various edge cases properly?
  • Is any written text as good as it can be? Did you check for typos?

Deployment

  • Does it need any special deployment process, and is the process well documented? Any data or schema migrations necessary?
  • Is everything backwards compatible? If not, did you communicate the change to users?
  • Should it be deployed/visible to employees only to dogfood/test for a while?

Communication

  • Are the relevant API docs complete?
  • Do any FAQ docs need to be updated or written?
  • Did you write a blog post announcement, if appropriate?

There you have it. Run through this list before shipping something new and you’re guaranteed to have a better launch.

Do you have anything to add to the list? Let me know!


Solve multiple problems at once

Startup engineering teams face many decisions about what to build. At Close.io, many areas compete for the focus of our small engineering team. Customers often have one little thing they really need. Our team envisions the next big thing to move the product forward. There are poor UX workflows to optimize. We have an idea on how to grow our customer base faster. And of course there are always bugs to fix. The list is endless!

A small engineering team doesn’t have the time or resources to regularly improve every part of a product. It’s not uncommon for a section of an app, once launched, to remain untouched for a year or longer. We usually work to solve a problem or empower customers in a new way or fix a pain point, and then we move on to something else.

One reason I believe our super small team at Close.io has been successful is that we often solve multiple problems at once. When a feature needs to be built, we often expand the scope a bit to include other related problems or features that naturally go together with the first one.

Another way to phrase this idea is: rather than solving a problem, solve an entire class of problems.

While this advice may sound obvious, there’s enormous pressure to finish a project as quickly as possible. There’s always the next important feature, bug fix, or redesign from the roadmap to move on to. Shipping even the smallest version of a feature on time can be difficult enough already, since software schedule estimating is never easy. But there’s great value in not just shipping a feature or fix in the smallest form possible.

Application: Fixing Bugs

Let’s start with a simple real-world application: you notice a bug that needs fixing. After some investigation you figure out what the problem is and how to fix it. You fix it and maybe even add a unit test for this case. Time to move on, right?

NO! If you stop there, you’re making a crucial mistake.

At a minimum, you should try to figure out if the same bug exists anywhere else in the codebase. Often a single ack search is enough to find the same bug in many places. Next, consider if there might be other conceptually similar versions of this bug elsewhere. Ideally you’d also follow The Five Whys and discover how this bug got introduced, how it got past code review and QA, etc.

Again, this advice may seem obvious, but consider someone’s natural instinct when a user complains. These complaints usually come in the form of a vague problem (e.g. “it won’t let me change my email address”). First you figure out what the real bug is (e.g. “The form for changing email addresses doesn’t show a confirmation message”). Bugs usually come in very specific forms like this. It’s not uncommon for a programmer to simply fix the bug and move on. But it’s important to stop and consider if similar bugs may exist elsewhere (e.g. “how form confirmation messages work throughout every part of the app”).

It’s the sign of a mature programmer to ask “why” and consider preventing future bugs of a similar type.

Example: A broken URL on Close.io

I recently noticed that a specific URL on our site was not working. I discovered the cause was that two Python view functions in our Flask app shared the same name, which just silently breaks one of them. My first instinct was to rename the broken view with a unique name and move on.

But I remembered it wasn’t the first time this had happened, and I recognized it likely wouldn’t be the last, so I thought about the problem more broadly. I knew an issue like this should be statically detectable, so I spent some time setting up pylint and reviewing its results. Pylint uncovered another case of the same error, as well as other types of logical errors elsewhere, which I fixed. Finally, I added pylint to our continuous integration system to automatically flag these statically detectable issues in the future.
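
To make that failure mode concrete, here’s a minimal, self-contained sketch (hypothetical function names, not the actual Close.io views). Plain Python silently rebinds a duplicated function name, and pylint’s function-redefined check (E0102) is exactly the kind of thing that catches it:

```python
# duplicate_view.py - a hypothetical illustration, not Close.io's code.
# Python silently rebinds the duplicated name, so the first view is simply lost.

def export_view():
    """Export a single lead (first definition: silently overwritten)."""
    return "single-lead export"

def export_view():
    """Export a whole team (second definition: the only one that survives)."""
    return "team export"

if __name__ == "__main__":
    print(export_view())  # -> "team export"; the first behavior is unreachable

# Running `pylint duplicate_view.py` reports E0102 (function-redefined) on the
# second definition - the kind of check worth wiring into continuous integration.
```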

So rather than fixing the broken URL, I fixed all cases where URLs were broken for the same reason. I also found and fixed other unrelated instances of statically detectable issues. And I automated this process so that this entire class of issues can never happen again. Do you see how much more powerful this type of fixing can be?

Once you’ve discovered the specific causes of a bug, there’s no better time to find and fix other similar bugs. Even when the roadmap begs you to move on, the benefits of squashing related problems right away are strong:

  1. It’s good practice to fix bugs before writing other code (#5 in The Joel Test)
  2. If you can discover and fix bugs before more users experience and report them, you’re preventing user pain.
  3. You’ve already done the hard part of figuring out the specifics of the problem. If you don’t completely resolve it, you’re forcing a teammate or your future self to have to waste time relearning the same thing!

Don’t just fix bugs. Fix an entire class of bugs.

Application: Designing Features

The temptation to solve just one problem at a time is even larger when it comes to features. Features are often a response to a user’s pain point, or an idea designed to empower your users in a new way. Naturally, the team is excited to ship as soon as possible.

Furthermore, a good product designer will optimize a feature to be as simple as possible for the specific workflow it’s designed for.

The problem is that over time, rather than designing one cohesive experience, you’ve glued a bunch of individual features together. If you’re only thinking about solving one specific problem at a time, you’re missing the bigger picture of how everything will fit together.

Software written in this way turns out to be super complex because hundreds of small problems were solved separately rather than a few big problems being solved elegantly.

Don’t design a single feature; always be designing for the bigger picture.

Example 1: Close.io Search & Filtering

An example where I think our team nailed this early was with search and filtering. From ElasticSales we knew that salespeople would want to slice and dice their leads in a million ways.

When starting Close.io, it would have been understandable if we had initially solved this by just slapping on a couple of the most commonly requested filter options, like “Lead Status”.

However we knew that this was a narrow and short-term solution. It wouldn’t be enough to last and wouldn’t be enough to “wow” people. Quickly, power users would outgrow our simple filters and we would be forced to keep adding additional one-off filters and complexity. We’d have to keep redesigning as the number of filters grew and redesigning again for each new idea like exclusion filters or nested “OR” conditions. We would have started fast but slowed very quickly.

Instead, we designed a framework to solve the larger problem. We invented a search language, and then a UI, to allow filtering by a very large number of useful sales attributes and combining them with boolean and/or/not keywords. It took longer to do it this way than just adding a couple of basic filters. But we established a paradigm for how searching and filtering work in Close.io that has powered innumerable use cases our customers needed and has lasted 2+ years. Our customers rave about its power, and PandoDaily wrote about it.
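
To illustrate the difference between one-off filters and a composable framework, here’s a minimal, hypothetical sketch (not our actual search language or data model): each filter is a predicate, and and/or/not combinators build arbitrarily nested conditions.

```python
# Hypothetical sketch of composable lead filters, not Close.io's implementation.
from typing import Callable, Dict

Lead = Dict[str, object]
Filter = Callable[[Lead], bool]

def field_is(field: str, value: object) -> Filter:
    """Atomic filter: does this lead's field equal the given value?"""
    return lambda lead: lead.get(field) == value

def and_(*filters: Filter) -> Filter:
    return lambda lead: all(f(lead) for f in filters)

def or_(*filters: Filter) -> Filter:
    return lambda lead: any(f(lead) for f in filters)

def not_(f: Filter) -> Filter:
    return lambda lead: not f(lead)

# "status is Qualified AND (state is CA OR state is NY) AND NOT unsubscribed"
query = and_(
    field_is("status", "Qualified"),
    or_(field_is("state", "CA"), field_is("state", "NY")),
    not_(field_is("unsubscribed", True)),
)

leads = [
    {"status": "Qualified", "state": "CA", "unsubscribed": False},
    {"status": "Cold", "state": "NY", "unsubscribed": False},
]
print([lead for lead in leads if query(lead)])  # only the first lead matches
```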

I’m definitely not saying we had to build this feature to 100% completion from day 1 (and we didn’t – we still iterate on it today – and in many ways it’s very far from complete). But thinking through a scalable solution for this problem rather than slapping on a few quick filters has given us a big advantage. Having an end goal in mind allowed us to build a version 1 that didn’t have to be thrown away when we built v2 and v3. We have ideas for what an amazing version 5 and 10 may look like, and we won’t have to start over – all because we planned ahead to solve search & filtering more broadly.

Example 2: Close.io Reporting

Some of our competitors have dozens of individual “reports”. They tack on a new report every few weeks because users always want more reporting. Close.io was really far behind in reporting, but the thought of adding dozens of reports made us want to cry. So instead we built one super powerful charting tool (Explorer) that, in one fell swoop, allows you to visualize almost any attribute of your team’s sales activity.

Example 3: Close.io Bulk Actions

We needed to build a way for users to “bulk delete” all their leads. Rather than building this feature alone, we designed a system that would work not only for Bulk Delete but also for Bulk Edit and Bulk Email (two other features we knew we wanted to build). Because we designed for this, we were later able to launch the two additional features within a very short period of time. Coding the two additional features became much simpler, and the UX for all bulk actions was considered together rather than tacked on without cohesion.
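
A minimal, hypothetical sketch of that design (not our actual implementation): one generic pipeline applies any per-lead action to a selection, so Bulk Delete, Bulk Edit, and Bulk Email differ only in the action they plug in.

```python
# Hypothetical sketch of a generic bulk-action pipeline, not Close.io's code.
from typing import Callable, Dict, Iterable, List

Lead = Dict[str, str]
Action = Callable[[Lead], None]

def run_bulk_action(leads: Iterable[Lead], action: Action) -> int:
    """Apply one action to every lead in a selection; return how many were processed."""
    count = 0
    for lead in leads:
        action(lead)
        count += 1
    return count

def delete_lead(lead: Lead) -> None:
    print(f"deleting {lead['name']}")

def email_lead(lead: Lead) -> None:
    print(f"emailing {lead['name']}")

selection: List[Lead] = [{"name": "Acme"}, {"name": "Globex"}]
run_bulk_action(selection, delete_lead)  # Bulk Delete
run_bulk_action(selection, email_lead)   # Bulk Email reuses the same pipeline
```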

Architect your product to solve an entire class of problems at once.

If you don’t, you’ll end up with software that’s missing important features, and users will quickly outgrow the one thing you helped them with. Or you’ll keep tacking on additions in a non-cohesive way, which makes for a complex product over time.

Said another way: it’s easier to end up with both successful users and a cohesive UI & UX if you solve and design for a few big problems rather than a bunch of individual little ones.

Application: Refactors

Technical debt can clearly become a big problem and slow development. But it almost never feels worth rewriting something just for the sake of code quality. The benefit of doing projects solely to “pay back” technical debt is hard to justify.

The best time to solve technical debt, refactor code, etc. is in the midst of making other changes to that part of the system. When you’re working on an improvement involving a problematic part of the codebase and you’re considering making bad code even worse… go ahead and take the extra time to refactor and improve it. There’s no better time to do so, since you’re already having to grok how it works and carefully test those parts related to your improvement.

Application: Redesigns

When redesigning how one part of your product works, consider how the rest of your product works. It may be easier to solve multiple problems that relate to each other all at once.

Example: Close.io Onboarding Process & Email Setup

We wanted to introduce a set of onboarding steps for new Close.io users. One step would be an easier way to connect your email account (for our 2-way email syncing to work) rather than having users do so later in Settings. What we did was build and launch a few features all at once:

  • Onboarding steps for new users
  • Simpler email account credential setup, by:
    • Auto-detecting your email service, IMAP/SMTP hostname, port, etc. when possible (see the sketch after this list)
    • Consolidating setup of incoming & outgoing email settings into one step
    • Using OAuth instead of passwords when a Gmail / Google Apps account is detected
  • Support for multiple email accounts & identities per user
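
As a rough illustration of the auto-detection item above, here’s a minimal, hypothetical sketch (the provider table, hostnames, and ports are examples only, not our actual detection logic):

```python
# Hypothetical sketch of email settings auto-detection, not Close.io's implementation.
from typing import Dict, Optional

KNOWN_PROVIDERS: Dict[str, Dict[str, object]] = {
    "gmail.com": {"imap": ("imap.gmail.com", 993), "smtp": ("smtp.gmail.com", 587), "oauth": True},
    "example-mail.com": {"imap": ("imap.example-mail.com", 993), "smtp": ("smtp.example-mail.com", 587), "oauth": False},
}

def detect_email_settings(address: str) -> Optional[Dict[str, object]]:
    """Return provider defaults for a known domain, or None to fall back to manual setup."""
    domain = address.rsplit("@", 1)[-1].lower()
    return KNOWN_PROVIDERS.get(domain)

print(detect_email_settings("jane@gmail.com"))        # pre-filled IMAP/SMTP settings, OAuth flow
print(detect_email_settings("jane@custom-corp.com"))  # None -> show the manual settings form
```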

Not all of these features were crucial for the main priority at the time, which was to improve onboarding and make it easier to set up email. But they made a lot of sense to build together, since they were interrelated. We would have had to redesign, recode, and retest the Email Settings page regardless, so it was the perfect time to design it to support setting up multiple accounts.

Supporting multiple accounts is a valuable feature that we always planned on building. But if we hadn’t built it alongside these other features, it likely wouldn’t have become a big enough priority to get built on its own for quite some time. By building to solve multiple problems at once, we were able to do more, faster, than if we had tried to solve each problem independently, one after another.

Risks & Rewards

You may now be thinking, “Isn’t this scope creep, and isn’t scope creep a bad thing?”

Indeed, if you keep expanding the scope of your projects to solve more and more problems you will never ship or meet deadlines.

But I’m actually advocating for more planning. More deliberateness in your design decisions and planning. More hesitation before starting projects that solve only one problem. Design your product with end goals in mind. Design code and processes with your future team in mind.

Expand a project’s scope opportunistically where it makes sense. Reschedule items onto the roadmap sooner if they are easier to build alongside whatever your current priority is. Often you won’t return to a problem for many months or even years. So if you can put an entire set of problems to rest all at once, do it, even if it takes a bit longer.

The principles I’ve been talking about should help you make a much better product over time. When you solve one problem, it’s not that much harder to solve the entire class of problems it belongs to.

The way to keep from turning this advice into scope creep is to slow down. Not slow down in the sense that your team & product shouldn’t be moving quickly, but slow down in the sense that you should do less, but better. Do fewer things, but ones that have long-term impact. You can’t do this for everything, but try to do it for the important parts.

So the next time you design a feature, fix a bug, or otherwise try to improve your product, ask yourself, “Can I solve multiple problems at once?”


Manage GitHub Issues milestones in Trello

In doing product management on an engineering-led project, GitHub Issues rock. The killer features are that it’s a) really simple, b) tightly integrated with code (you can reference/close issues via commit messages), and c) good at facilitating discussion of issues just like it does for code.

What GitHub Issues suck at is giving you a high-level view, where you can see more than 30 issues at a time, broken out by milestone or by person. (You can only filter to see issues for one milestone or one person, but you can’t easily move multiple issues between them.)

I’d really like to see a Trello-style interface for managing GitHub Issues. Some very limited integrations exist, but what I’m looking for would let you quickly move issues around between milestones. This would help plan a product roadmap and be able to visualize what the upcoming milestones look like in one place.



Objective Process for Product Reviews

I watched Jeff Veen’s “Designing for Disaster” talk and took away a couple of parts that I thought were really good. Some notes:

How to do Product Reviews (can be design, product, process, anything) — making an objective process out of something that is very subjective.

  • Optional attendance, but mandatory participation (keeps everyone focused)
  • Not a forum for expressing opinions
  • Rather, a place to solve problems.
  • Define at the beginning whether the session is supposed to be divergent or convergent.
    • Divergent – I want as many ideas to solve this problem as possible; let’s talk about everything; brainstorming
    • Convergent – Evaluating feasibility, acknowledging constraints. Driving towards consensus.

Driven by Purpose

  • Measure momentum in days (weekly checkup of progress)
  • Measure projects in weeks (figure out pace, when we will go out with the next thing)
  • Measure priorities in months (“we’re going to focus on performance and distribution in Q2”)
  • Measure vision in years (“organize the world’s information and make it universally accessible”)

 
