Note: This post was originally published on our company site, which has since been taken offline.

The last half of 2019 brought some big changes for us at Digital Solutions: We had a team member leave, and we also saw substantial growth. That meant hiring four new folks, taking our team from six to ten. As a small, bootstrapped company without a dedicated hiring manager, we have to prioritize our hiring and make room for it in our daily work; at the same time, our size gives us a lot of freedom to experiment with our hiring practices. This article is a report on our recent hiring experience: what we changed along the way, and how we applied to our own hiring process the same principles we use to help customers solve business problems.

While we had a well-defined hiring process in place, we had not made a hire in well over a year. During that time we had also undergone some leadership and cultural changes, and we weren’t sure whether the existing process was still a good fit for us. As we prepared to advertise and start interviewing, we were thinking about outcomes and looking at the hiring process the same way we would a customer product; for us this meant collaboration, identifying and prioritizing our goals, forming hypotheses about how to achieve them, and then iterating.

Collaboration

One of our baseline assumptions was that we wanted the entire team more involved in hiring–both in participating in the hiring process and in helping define what that process should look like. In the past, hiring was an activity primarily for management and some senior engineers; more of the team would be brought in for a final, “round table” interview, but often a decision had already been made.

By discussing our existing process together, we came up with a few ground rules–things that could serve as anchors, even if we experimented with the details. First, we wanted more transparency into resumé reviews. Second, we wanted at least two team members involved in every interview step. Third, we wanted the final decision to be made by the people who would be working most closely with the new hires.

Resumé Reviews

One of the biggest “gates” in our previous hiring process had been resumé reviews: no one other than senior management was involved in the initial review; at best, team members would get handed a single resumé for a sanity check. We wanted to open up the review pipeline to everyone on the team, and give everyone an opportunity to champion candidates and move them forward through the hiring process. One of the themes that came up over and over again was that having different perspectives makes for a better hiring process, and we felt that should apply to resumé reviews as well.

Pairing on Interviews

We wanted every interview phase to be conducted by a pair of engineers; several folks on the team had interviewed this way before, and we believed it would have several advantages. It gave us another perspective on every candidate, which is valuable: two people will catch things that one might miss, and they can also check each other’s biases. In addition, three-person interviews tend to be more conversational–an interview is a very unnatural environment, and having three people helps alleviate that. Finally, pairing allowed everyone to participate, even if they had never conducted an interview before; it was a great opportunity to grow skills on our team.

Pushing Decisions to the Team

We wanted hiring decisions to be made as close to the work as possible. While we all participated in the interview process, if we knew that a candidate would be working with specific people, we tried to make sure those people were involved. When we had to choose between multiple good candidates, the final decision was made by the folks who would be working with them every day.

Prioritizing Goals

There are a huge number of factors that can contribute to or detract from healthy and effective teams. While skills and aptitude are important, so is the ability to get along with coworkers. Diversity clearly makes for stronger teams, and we also need to plan for the future by training up folks who are early in their careers. Then there are budgetary and salary considerations, for both the company and the candidate. Balancing all of these things can be challenging, especially for a small company!

We knew that we couldn’t tackle everything at once, and that it would take time and energy to get to where we wanted to be. Once again we applied our normal working principles to the problem; using Mike Rother’s Improvement Kata, we thought about what the overall direction for the company should be, our current conditions, what our target state should be after this round of hiring, and what we might do to get there.

Direction

We first asked ourselves, where do we want to be? This is a huge question, and one that we discuss regularly as a team. We knew that we wanted a collaborative, diverse team, focused on solving problems and not just slinging code. We were highly focused on continuous improvement, and looked at hiring as an opportunity to grow skills across the team; we wanted candidates who would add to our culture, not just fit into where it was today.

Current Condition

Going into hiring, we tried to take stock of where we were, and how we’d gotten there. In terms of collaboration, we had found a way of working that we were pretty happy with; we’d adopted XP and mob programming across multiple cross-functional teams, and were aggressive about limiting work in progress and tackling problems together. We wanted to continue in that vein, and so we were looking for folks who were excited and energized by working in that way.

In terms of diversity, we had a few different factors to consider. Through a combination of promotion and attrition, we had ended up with a cluster of mid-to-high level engineers; we had a gap in both technical leadership and folks earlier in their careers. We had a 50–50 split of women and men, but this was due in large part to attrition; less than a year prior, we were at 25–75. And when it came to ethnicity, our organization looked the same as most of the tech industry: very white.

Target Condition

With those things in mind, we could identify some target conditions for this round of hiring. It was easy to come up with a lot of goals for our organization; again, we used techniques from our daily work to “story map” a list of objectives, and then prioritize them.

Ultimately, we decided to focus on three goals for this round: hiring at least one “senior” engineer, hiring at least one “junior” engineer, and maintaining our gender balance at 50%, plus or minus one person. We had already been successful in both hiring and growing some great technical leads, so we were pretty confident about filling the “senior” role; we felt that our biggest challenge would be attracting underrepresented candidates and folks early in their careers, and that emerged as our highest priority.

Experiments

Once we had identified our top priorities, we talked about how to achieve them. The folks we were looking to hire had historically been very scarce in our candidate pools, so we did a lot of reading and brainstormed ways to approach that problem. We also thought about how to conduct our interviews in a way that would give us feedback about whether what we were trying was working, and allow us to make changes as needed. Eventually we settled on a few hypotheses, and some basic testing strategies.

Our Initial Hypotheses

We focused our initial efforts on three hypotheses: We thought that writing an inclusive job posting, committing to interview equal numbers of women and men, and making sure the interviews themselves were a positive experience for our candidates would help us attract people we wanted to work with, people who would make our teams and organization better and more resilient.

An Inclusive Job Posting means More People will Apply

Based on our goals and our experience with tech demographics, we knew that we were targeting people who might feel apprehensive about applying. Because of this, we really wanted to make sure that our job posting was inclusive and encouraging; we hypothesized that one of the biggest obstacles to finding the candidates we wanted was getting them to apply in the first place, and that we could help remove that obstacle by crafting a posting that spoke to our culture and was welcoming to potential candidates.

Interviewing More Women means Hiring More Women

We put a lot of thought into how best to reach the candidates we wanted, and did a lot of reading. Martin Fowler’s great article was especially influential; as he points out, it’s possible to build as diverse a team as you like, regardless of the distribution in your candidate pool–it may just take more time. Based on this idea, we had a simple hypothesis: If we interviewed more women, we would hire more women.

One of the things we discussed right from the beginning was a willingness to be patient; especially for a company as small as ours, finding the right people was much more important than filling positions quickly. We weren’t in a hurry, and so we decided to interview in cohorts, with approximately equal numbers of men and women. In practice, that would mean leaving our job posting up for a long time, looking at a lot of resumés, and waiting until our top six or eight candidates were evenly split between men and women.

A Positive Interview Process means More Great Hires

The last thing that we wanted to experiment with was how we structured our interviews; we wanted them to be low-pressure, encouraging, and personable, and we hoped that candidates would have a positive experience regardless of the outcome. We had a plan to help get underrepresented candidates into the interview process; by focusing on making the experience a positive one, we thought we could help them feel comfortable joining our team, and make the hires we wanted.

We discussed several approaches to help make our interview process less stressful for our candidates; first, we emphasized that interview questions are a tool to help us learn about each other, not a gate or “gotcha.” Rather than tricky code challenges or whiteboard algorithms, we opted for discussions and dialog around architectural and delivery principles, along with collaborative coding exercises in a mob.

We also wanted to make sure that candidates were comfortable with their interviewers; we planned to give all our candidates a chance to talk to someone on the team in a similar position–we wanted female candidates to be able to talk with other women on our team, and to give everyone the opportunity to discuss the job with someone who was working at a similar level.

Overall we felt good about our initial hypotheses, but we also recognized that they were a starting point. As we talked with more candidates, we would get a better feel for what was working and what wasn’t, and make changes to our process as we learned.

Iterating on the Hiring Process

Treating our hiring process the same way we would product work, we took an iterative approach and tested along the way. We weren’t sure what the end result would look like, but we had a few ideas that we could get started with.

We decided to try interviewing in cohorts of 10-12 candidates, for several different reasons. We were a small team, trying to balance hiring with our day-to-day work, and working through a single, small group of candidates at a time felt more manageable. As mentioned above, we could afford to take our time, and we thought cohorts might help ensure that we had a good mix of candidates. Finally, a cohort approach lent itself to iterating and testing–each cohort could serve as a small experiment, and give us an opportunity to reflect and improve the process.

It took us two to four weeks to move a single group all the way through the process, from the time we had received enough qualified resumés to the time we were prepared to make offers; once we started interviewing the first group, subsequent cohorts went a little faster, since we were still getting new resumés during the interview process. At the end of each cohort, the team would meet and hold a retrospective, to discuss our approach and look for improvements. Often we would find patterns by comparing the team’s experiences and observations, and we could also brainstorm better ways to test and validate our process.

Testing and Validation

We tried to validate our hiring process the same way we would a product idea, and we took some cues from user research; the “north star” that helped guide us in validation was to assume that we had no idea if our process was effective or not, and ask ourselves how we would know if it was working.

In a normal interview flow, each step in the process helps gather information, and can serve as a decision point to narrow down the candidate pool. If the process is effective, we will eventually converge on a small list of candidates who are a good fit. But if we don’t know whether the steps in our process are effective, how can we test them? One technique that we used to evaluate our process was to “open the throttle” on our interview phases.

To do this, we interviewed candidates as normal, and recorded our thoughts as we usually would. But rather than using any particular phase as a decision point, we promoted candidates through every phase of the process–even if we felt a session didn’t go well, we would move the candidate forward, to make sure we weren’t missing something.

At the end of the process we would still make a decision, and then we reviewed the notes from each phase. If they were in line with the final decision, that was an indication that the phase was effective. If they differed, it was a signal that we needed to look deeper at how we were conducting that phase of the interview. We had more than a few cases where the final decision didn’t line up with the impression we had of a candidate earlier in the interview, and this allowed us to fine-tune our process, look for biases, and avoid both false positives and false negatives.

We also asked candidates about their experience, and tried to keep the dialog open, whether we had a place for them or not. If we didn’t feel that a candidate was a good fit, we made sure to give them feedback about why; sometimes a candidate was lacking skills that we needed, and we would point them to learning resources and talk a bit about why we valued those skills. Other times we had great candidates who just didn’t fit our immediate need, and we offered to reach out to them directly if we had an opening in the future. We tried to give all our candidates feedback on how we felt the interview went, and we asked them for feedback about our process; based on that feedback, we could adjust and improve the process for the next group of candidates.

What We Learned

We went through three cohorts of candidates, and in each iteration, we found something we could improve. Working through our first cohort, we discovered right away that some of the language in our communications wasn’t clear, especially around our TDD exercise, and we made several changes to our email communications. We also heard a lot of the same questions from candidates, and added some additional detail to the job posting itself to help answer them–more detail about benefits, and calling out some other advantages, like our dedicated education budget and learning time. We also learned very quickly just how lopsided our candidate pipeline was; it was easy to find male candidates who met our criteria, but it took much longer for enough qualified women to apply to fill out the cohort. In the end, we missed out on some great candidates who weren’t willing to wait–including some women! To help alleviate this problem, we decided to try decreasing our cohort size.

In the second cohort, we adjusted the language again in our communications; we continued to see candidates submitting their TDD exercise without “showing their work,” and we tried to be very explicit about what we were looking for, and why. We also changed the way we were reviewing resumés; we had started off pairing or mobbing on resumé reviews, but we all agreed that it didn’t feel “good.” We had already opened up the candidate pipeline to the whole team, and so we decided to try out a model where anyone on the team could promote any resumé into the interview process, without prescribing any particular methodology for the review process. Finally, we made an adjustment to the overall process itself; as candidates moved through the stages of the interview, they had a designated “guide,” a single person who was with them through the entire process.

Going into the third cohort, we were concerned that the interview process was starting off with an impersonal vibe; the first phase of our interview was a short code exercise to demonstrate TDD, with only an email as a contact point. We decided to try starting folks off with a phone conversation instead. We were explicit that this was not necessarily a screen; we were more interested in giving candidates a touch point within our team, and helping us think of candidates as real people, rather than just a name on a resumé. This felt like an immediate improvement, and it also gave candidates a chance to learn more about us; in a few cases we were able to mutually recognize that we weren’t a match and move on, having only invested a few minutes.

Lessons for Next Time

After several months we had made some fantastic hires, and done a reasonable job of meeting our three goals. But while we were very happy with the folks joining our team, we also recognized some opportunities to improve our hiring process further. While we did technically meet our goal of keeping gender balance around 50%, we ended up with a slightly less balanced team–going from 50–50 to 40–60, and three quarters of our new hires were men. We felt that the changes to our hiring process were somewhat helpful in reaching the right candidates, but there is clearly still a tremendous amount of work for us to do around team diversity.

The first place where we want to experiment in the future is how we advertise our job openings. We had tried advertising on a couple of different job boards in the past, and had eventually pulled the trigger on a contract with Stack Overflow Careers. This was an expensive option for us, since it required a year-long contract, and we weren’t entirely happy with the response we got. While we did get lots of good candidate responses, we also got a lot of noise. Most importantly, we didn’t see many responses from under-represented candidates, especially at senior levels. This wasn’t entirely a surprise, but going into 2020, we’ll probably use that money to focus on more specific advertising–perhaps by travelling to career fairs or conferences.

Another change we will try in the future is reaching out on Twitter and other social media for help getting the word out. As a small and unknown company, we were very hesitant–almost embarrassed–to tag people directly and ask for amplification. There are a number of well-known folks who have been inspirational to us, both personally and as an organization, and we like to think that we share many of the same values; in the future, we may try asking them for help.

Finally, we will likely experiment with some less traditional ways of connecting with potential candidates. A couple months after we wrapped up our hiring round, Charity Majors tweeted about a very interesting approach that Honeycomb was taking: encouraging candidates from under-represented groups to reach out to her directly, without needing to apply through a formal process. We thought this was a great idea, and one we’ll be excited to try the next time we hire.

Overall the experimental approach worked well for us, even if not everything we tried led to success; part of scientific thinking is not feeling defeated when a hypothesis doesn’t turn out the way you expect. Instead, we hope to treat those moments as opportunities to learn and reflect about our processes, our goals, and the biases or misconceptions in our own thinking.

Inspiration and Bibliography

As a small and relatively young company, we couldn’t have landed where we did without standing on the shoulders of giants; there are a number of people who have done great work in building and maintaining healthy organizations. We have learned, and continue to learn, from their experience, both through their specific writing on hiring and through the work they put in on social media, participating in the often difficult conversations about how to make tech a better place for anyone to work.

Our job posting was certainly influenced by Laurie Voss, especially the samples from their internal hiring documents.

Marco Rogers’ interview gave us some great suggestions for structuring our own interview process, and helped validate some things we were already doing. His conversations on Twitter are always enlightening, whether he’s talking tech, diversity, or culture.

We’ve sometimes felt that the folks at Honeycomb have been peeking into our Slack; every time we’ve been wrestling with a tough question or wondering if we’re taking a crazy approach, it seems that Charity Majors or Liz Fong-Jones is writing about the same things. Charity’s post on “The Enterprise of Hiring” especially resonated with us.

We’ve been paying attention to Martin Fowler’s technical work for a long time, but his posts on diversity have been just as helpful; “DiversityMediocrityIllusion” is a particularly great framing.

Finally, David Heinemeier Hansson and the rest of the Basecamp team have been an inspiration to us as we grow our own organization. Basecamp doesn’t hire very often, but they did this year, and we were able to get some great insights and validation from their writing about the process.
