Baseball is a game of tradition. Evaluating player talent falls to a team’s scouting staff, who observe one or more games and submit a written report of their observations along with a prediction of likely future performance. These reports are rarely completely objective, relying instead on the scout’s perception of the player’s abilities and the scout’s experience with other players at that position. Although grounded in observation, a scout’s evaluation is more often a subjective, rather than objective, assessment of the player’s potential. For example, one scout who evaluated Hall of Fame pitcher Don Sutton wrote, “…knows how to pitch. Winner!” and “…have a good feeling about him.”

In his 2003 book, Moneyball: The Art of Winning an Unfair Game, Michael Lewis chronicles how Billy Beane, then general manager of the Oakland A’s, turned the baseball world on its head by eschewing this conventional wisdom and adopting a more analytical, results-driven approach to assembling a winning baseball team. Beane’s approach took the A’s to the American League playoffs with a team salary significantly lower than that of perennial contenders like the New York Yankees. As a small-market team, the Oakland A’s couldn’t compete with teams like the Yankees on player salaries, so they had to find creative ways to attract high-performing talent at a fraction of the salary cost.

At the time, baseball’s traditionalists often looked to statistics such as batting average, stolen bases, and runs batted in as leading indicators of a player’s offensive potential. Yet rigorous analysis proved that other, easily measured outcomes such as on-base percentage and slugging percentage were better indicators of offensive potential. As a result, Beane’s premise was that a more analytical approach to player aptitude, measured by performance outcomes rather than gut feel, would enable him to assemble a team of contenders within his budget constraints. In what was arguably one of the first practical business applications of “big data analysis,” Beane parlayed this approach into a winning formula in Oakland, which has since become a model for many other major league teams that have hired full-time moneyball-style analysts.

What does all this have to do with staffing a winning contact center? There are significant parallels that make the comparison hard to ignore: 1) agent hiring decisions tend to be based more on subjective impressions than on an exhaustive analytical evaluation of probable outcomes; 2) contact center performance is regularly measured and rigorously analyzed at the individual contributor level; and 3) many of these measures are closely tied to overall business performance. All of these factors point to considering a moneyball approach to agent hiring.

“Have a good feeling about him.”

As in baseball, recruiting talent for the contact center has often centered on gut feel. Sure, recruiters evaluate resumes, and companies will often run candidates through a series of assessments to ascertain what skills applicants possess, but the final decision about whom to move forward in the hiring process is often subjective – “this person sounds good on the phone.”

The current agent hiring model tends to be geared more toward filling seats in the short term – specifically the training and nesting periods – than toward long-term retention and performance. While companies spend substantial time, effort, and money on behavioral, cognitive, personality, and skills assessments, these are not always reliable forward-looking indicators of long-term success when viewed through the lens of business performance.

The moneyball approach to contact center hiring.

The moneyball approach to contact center hiring presumes a shift in conventional wisdom: from the current, largely subjective, process to one that relies heavily on analytics measured against desired performance outcomes. Like its baseball counterpart, moneyball for contact centers uses big data analysis to identify job candidates who are likely to deliver those outcomes, based on capabilities observed during the pre-hire phase of their employment.

Know the end goal.

Agent hiring is normally a sequential process that begins with a job description and a position requisition. Job descriptions tend to be fairly generic and largely similar across organizations. It’s not unusual to see requirements like, “must have excellent customer service (or sales, or collections) skills” and “must be proficient with a computer and the internet” on a typical job description. Why? Because these attributes are easy to measure with currently available assessments and tests.

You are unlikely to see things like, “must deliver 80% or better FCR” or “must achieve 90% or better CSAT” in a job description because these things are very difficult to measure with the pre-hire tools that are commonly available. Yet, these are the very things that new hires are expected to deliver.

The first step in implementing a moneyball approach to agent hiring is to identify and prioritize the key metrics that drive business performance. These metrics may be different for individual groups – first call resolution (FCR) and Net Promoter Score (NPS) for customer service positions, and sales conversion and revenue per sale for sales positions, for example. To get the greatest impact, these should be measured down to the individual agent level and reported on a regular basis.
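
As a rough illustration (not a prescription), the sketch below shows one way an organization might encode prioritized metrics per role and report them at the individual-agent level. All role names, metric names, and values are hypothetical.

```python
# A rough sketch: define prioritized metrics per role and report them
# at the individual-agent level. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class RoleMetrics:
    role: str
    metrics: list[str]  # prioritized, most important first

ROLE_METRICS = {
    "customer_service": RoleMetrics("customer_service", ["fcr", "nps"]),
    "sales": RoleMetrics("sales", ["conversion_rate", "revenue_per_sale"]),
}

def agent_report(agent_results: dict[str, dict[str, float]], role: RoleMetrics) -> None:
    """Print each agent's numbers for the role's prioritized metrics."""
    for agent, results in sorted(agent_results.items()):
        line = ", ".join(f"{m}={results.get(m, float('nan')):.2f}" for m in role.metrics)
        print(f"{agent}: {line}")

# Hypothetical per-agent customer-service results for one reporting period.
agent_report(
    {"agent_001": {"fcr": 0.82, "nps": 41.0},
     "agent_002": {"fcr": 0.74, "nps": 33.0}},
    ROLE_METRICS["customer_service"],
)
```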

Knowing the goals ahead of time and identifying the forward-looking indicators of excellent performance leads to better hiring decisions, stronger agent performance throughout the employment lifecycle, and reduced attrition.

Measure pre-hire performance.

Much of what an agent candidate does before he or she is hired is used only to substantiate the hiring decision, but is not quantified. Oftentimes, these assessments are administered only because it’s “policy” to do so, without any real knowledge of what results they are intended to predict. Even tests and assessments that produce candidate “scores” are rarely used to truly measure performance potential.

Typing and screen navigation tests are common assessments administered to agent candidates, and they are certainly useful in determining candidates’ technical skills in these areas. They may be useful in predicting relative proficiency with technology, which in turn may predict metrics such as average handle time, but they do little to predict performance against key customer service metrics.

Assigning a score to each interview, test, or assessment presented to a candidate during the pre-hire phase of his or her employment forms the foundation on which moneyball-style analysis can be performed. Recruiters (and, for that matter, other stakeholders who participate in the interview process) can score interviews using a quality assurance-style form of the kind that is already part of the daily repertoire of supervisors and coaches. Tests and assessments are often auto-scored with a numeric value or grade. Collecting and consolidating these scores provides a comprehensive view of each candidate’s qualifications and can be saved as part of their “permanent record.”
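
One way this consolidation might look in practice is sketched below. The record structure, assessment names, and 0–100 scale are assumptions for illustration, not a specific product’s data model.

```python
# A minimal sketch of consolidating pre-hire scores into one candidate
# record. Field names and the 0-100 scale are illustrative assumptions.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class CandidateRecord:
    candidate_id: str
    scores: dict[str, list[float]] = field(default_factory=dict)

    def add_score(self, assessment: str, score: float) -> None:
        """Record one scored interview, test, or assessment (0-100)."""
        self.scores.setdefault(assessment, []).append(score)

    def summary(self) -> dict[str, float]:
        """Average multiple scores per assessment (e.g., several interviewers)."""
        return {name: round(mean(vals), 1) for name, vals in self.scores.items()}

record = CandidateRecord("cand-0042")
record.add_score("structured_interview", 86)  # recruiter's QA-style form
record.add_score("structured_interview", 78)  # hiring manager's form
record.add_score("typing_test", 92)           # auto-scored assessment
print(record.summary())  # saved as part of the candidate's "permanent record"
```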

Find leading performance indicators.

Having identified the key performance metrics that drive business outcomes and collected a comprehensive view of each new hire’s pre-hire performance, the moneyball model brings the two together to look for correlations between them. A strong correlation suggests which pre-hire assessments are good leading indicators of desirable business outcomes, while a weak or nonexistent correlation indicates which assessments are poor leading indicators.

This approach is valuable from two perspectives. The first, and most meaningful, is to identify which agent candidates are more likely to produce the business outcomes the organization desires. The second is to identify which assessments provide value in predicting future performance. Companies should consider discontinuing those assessments that do not correlate with desired operational performance.
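
As a concrete illustration of the correlation step, the sketch below runs a standard correlation analysis over a small, entirely hypothetical set of past hires. The column names and values are invented, chosen to mirror the pattern discussed next.

```python
# A hypothetical correlation analysis: each row is a past hire with both
# pre-hire scores and observed on-the-job metrics. Values are invented.
import pandas as pd

hires = pd.DataFrame({
    "interview_score":   [86, 72, 91, 65, 80, 77],              # pre-hire
    "nav_test_score":    [92, 88, 70, 95, 60, 84],              # pre-hire
    "fcr":               [0.84, 0.71, 0.88, 0.66, 0.79, 0.75],  # post-hire
    "avg_sale_per_call": [31.0, 42.5, 28.0, 39.0, 35.5, 30.0],  # post-hire
})

# Correlate each pre-hire assessment with each post-hire metric.
corr = hires.corr().loc[["interview_score", "nav_test_score"],
                        ["fcr", "avg_sale_per_call"]]
print(corr.round(2))
```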

In this example, the scores derived from recruiter and stakeholder scoring of candidate interviews are a good leading indicator of FCR performance, but not of average sale per call. Conversely, the screen navigation test is not a reliable indicator of performance on any of the relevant metrics. Based on these results, the organization might consider discontinuing its use, as it has little value in predicting performance.

The process of finding leading indicators takes some time to get started, but it also improves over time, since it is the organization’s own observed performance metrics that are used to identify relevant correlations. Advanced computing techniques such as predictive analytics and machine learning make the process reasonably easy to implement and manage.
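
To make that last point concrete, the sketch below extends the hypothetical dataset above with a simple regression model. scikit-learn is one common choice; the features and values remain illustrative, and a real deployment would involve far more data and validation.

```python
# A minimal predictive-modeling sketch using scikit-learn, continuing
# the hypothetical dataset above. Features and values are illustrative.
from sklearn.linear_model import LinearRegression

# Pre-hire scores (interview, screen navigation) for past hires ...
X = [[86, 92], [72, 88], [91, 70], [65, 95], [80, 60], [77, 84]]
# ... and the FCR each of those hires later delivered on the job.
y = [0.84, 0.71, 0.88, 0.66, 0.79, 0.75]

model = LinearRegression().fit(X, y)

# Estimate the likely FCR of a new candidate from pre-hire scores alone.
# The model improves as more hires (and their outcomes) accumulate.
print(model.predict([[83, 75]]))
```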

Moneyball has proven itself as an effective model for evaluating talent based on desired business outcomes. It has become the “go to” approach in professional sports, and the lessons learned apply more broadly to organizations that hire to drive specific, measurable business outcomes. In the context of the contact center, a number of organizations already use it. Those who take this approach report significant improvements in retention and performance, along with reduced hiring costs.

Anatomy of a baseball scouting report.

To finish on the baseball theme, the following is an excellent resource for original scouting reports of some of baseball’s stars: http://scouts.baseballhall.org/reportanatomy. It is interesting to see how scouts evaluated these players early in their careers. Some were prescient; others were way off the mark. When reading these, consider when they were written; some will be politically incorrect by today’s standards.
