How data self-service has
changed the way we work.
When it comes to technological disruption, the recruitment industry still has a long (but exciting) path ahead. There have been some attempts at changing this through various Applicant Tracking Systems (ATSs), Software-as-a-Service (SaaS) tools or in-house products, but there has not been much truly game-changing technology, apart from, perhaps, LinkedIn back in the early 2000s.
This is a pity, as there is a huge amount of data lying around that could be used to make the whole process faster, more efficient, less biased and generally better for everyone involved: imagine finding your dream job within days, rather than months, of searching. At The Big Search, we have been working on becoming tech-enabled for several years now. Our Engineering team has developed in-house software, Navis, aimed specifically at agency recruitment. Last year, we embarked on yet another journey: the journey to become a truly data-driven company. One year on, I am excited to say that we have made significant progress. There is still a long path ahead of us, but we have learned some interesting lessons along the way, which I will share in a new series of articles, this being the first. My sole aim in writing these articles is to motivate and guide other service-based companies, particularly small start-ups and agencies, to embark on a similar journey. Smart data utilisation can be achieved with the simplest resources.
The journey to become data driven.
No matter how smart you are, if you don’t have reliable information to base your decisions on, your decisions will always be subpar.
“I always wanted TBS to be data driven. Can you look into how we can do it?” asked our CEO Learco Finck in 2020. This was the question that started it all. And even though it might sound like a well-defined task, we, the Operations and Engineering teams, were immediately faced with the complexity of such a goal. What does it mean to be data-driven for a recruitment agency? What is realistic within our context? As a service company with 100 employees, we could not afford a large Analytics team with BI analysts, statisticians and ML engineers. Also, what problems were we trying to solve with data? Or, in short, why even bother with data? What value could it unlock for the business?
These questions were critical to answer, as they would determine our tools and strategy. There was no budget for building an Analytics team at the time, so we needed to aim for something that would be possible with our existing resources. But these were limited, and we risked not being able to deliver significant business value, jeopardising further investment into this area. After some consideration, we decided to follow a strategy that we internally nicknamed “Data Self-Service”.
Why we needed to change.
Before I explain what “Data Self-Service” is, I want to first discuss the reasons that led us to selecting this approach. When building a data-driven company, it is extremely important to understand the reasons behind this goal.
For context, TBS is not a SaaS company where engineers make up over 60% of the headcount, nor are we a large corporation with a seemingly unlimited budget. We are a 100-person recruitment agency, where the closest most people got to “data” or “analytics” was an Excel sheet sent via email (not that there is anything inherently bad about an Excel sheet, but it is not quite enough).
We were facing constraints like:
The pressure to receive information on time. Recruitment is a fast-paced business. You need information immediately when making a decision, for example on a client call. It is no use receiving a report two days (or a week) after asking for it, because by then it can already be too late. Furthermore, as a fast-growing startup, we need to react quickly to changing situations, such as during the COVID-19 pandemic. Waiting a week to make a decision is just not an option.
We had our own internal software that could be a great source of data. Our teams used it for their daily work, which made it a promising data source. But it was only reliable if it was utilised properly and if the data entry quality was good (more on that in a later article).
We had a very small team and budget. We started with one part-time engineer setting up the data infrastructure, plus one BI analyst and architect.
Most TBS employees had never used data in a systematic way before. This meant there was general distrust and worry when it came to data; instead of excitement, data raised anxiety. Analysing this situation, it became clear that we needed a solution that gave users access to data in real time, so that they could make decisions without having to wait days or weeks for certain information.
Additionally, we needed to minimise the involvement of BI analysts. If each data request required the involvement of an analyst, there would be no time left for the analyst to do anything else (such as building more complex predictive models).
Lastly, it was clear that building a data-driven company would be much more than just building the data pipeline, data lake and reports: we also had to build a data-driven culture (I will cover this in a future article). Thus, Data Self-Service was born.
So, what is Data Self-Service?
Data Self-Service is a user-friendly system where business users can find the up-to-date information, insights and high-quality data they need, without requiring the involvement of the Analytical or Engineering team.
The basic idea behind our Data Self-Service is similar to the self-service checkout at a grocery store. Instead of requiring the store clerk (analyst) to process every single shopping request (information request), the self-service checkout enables the shopper (business user) to get the food (information) themselves. This frees the store clerk (analyst) to spend their time on higher-value tasks, such as making more products (information) available to the users.
How we built our Data Self-Service.
Our Data Self-Service is based on the 80/20 rule. While there are many pieces of information needed by various stakeholders to make better decisions, 80% of stakeholders ask the same 20% of questions. Since certain questions were appearing over and over again (and we could justify the business need), we put resources into building a report that could answer them without the involvement of the Data team.
Our goals were to build:
A single source of truth. No more multiple, inconsistent Excel spreadsheets. Instead we would create one place where the data is always up to date and consistent.
Quality, accurate data. It was vital that anyone whose actions convert into datapoints would be trained accordingly, so that the reports can be accurate.
A user-friendly interface. It had to be flexible enough for nontechnical users to find, filter and investigate the information they needed.
To achieve the first goal, we built a data lake into which we integrated key data from the software we already used. The data refreshes every 2 hours, providing a reliable and up-to-date resource for us to run any analysis or report. While this goal requires some Engineering resources, it can be built and managed by a relatively small team (in our case, one engineer is enough, and they can still carry out other tasks alongside it).
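The mechanics of a periodic refresh like this can be sketched as an incremental sync: every run copies only the records that changed since the last refresh into the lake. This is a minimal illustration, not Navis's actual pipeline; the record fields and function names are assumptions.

```python
from datetime import datetime, timedelta

def incremental_sync(source_records, lake, last_sync):
    """Upsert records updated since last_sync into the lake, keyed by id.

    source_records: list of dicts from the source system.
    lake: dict acting as a stand-in for the data lake table.
    Returns the number of records synced this run.
    """
    changed = [r for r in source_records if r["updated_at"] > last_sync]
    for r in changed:
        lake[r["id"]] = r  # upsert: insert new or overwrite stale copy
    return len(changed)

# Usage: a refresh running every 2 hours only picks up recent changes.
now = datetime(2021, 6, 1, 12, 0)
last_sync = now - timedelta(hours=2)
source = [
    {"id": 1, "updated_at": now - timedelta(hours=3), "status": "offer"},
    {"id": 2, "updated_at": now - timedelta(hours=1), "status": "hired"},
    {"id": 3, "updated_at": now - timedelta(minutes=30), "status": "offer"},
]
lake = {}
synced = incremental_sync(source, lake, last_sync)
```

Because each run touches only changed records, the refresh stays cheap even as the source system grows, which is what makes a 2-hour cadence practical for a small team.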
For the second goal, we had to identify the connection between actions and datapoints. We also ran training sessions with internal users so they understood how to enter data correctly — and why this was important. Using our own internal software made this easier, as we could influence both the datapoints we collected and also the quality of data. Still, making sure that this is consistently done across the whole business took time.
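One concrete way to support the second goal is an automated completeness check that flags records missing the fields the reports depend on, so training can be targeted at real gaps. This is a hypothetical sketch; the required field names are illustrative, not Navis's actual schema.

```python
# Fields our (illustrative) reports cannot work without.
REQUIRED_FIELDS = ("candidate_id", "stage", "stage_date")

def find_incomplete(records):
    """Return records where any required field is absent or None."""
    return [
        r for r in records
        if any(r.get(field) is None for field in REQUIRED_FIELDS)
    ]

# Usage: the second record is missing its stage_date and gets flagged.
records = [
    {"candidate_id": 1, "stage": "offer", "stage_date": "2021-06-01"},
    {"candidate_id": 2, "stage": "offer", "stage_date": None},
]
incomplete = find_incomplete(records)
```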
For the final goal, we found several off-the-shelf solutions on offer: Tableau, Power BI and Google Data Studio. We decided to go with Power BI, as it combined strong tools for analysts to develop meaningful reports with a simple-enough user interface for business users. Additionally, the business user interface can be set up as “Read Only”, meaning there was no risk of users messing up the reports or the underlying data. This was very important, as most users were anxious at first to use this data out of fear they might break something. Making it clear to them that they would have to outsmart Microsoft engineers to do that helped combat this anxiety.
The power of one simple table.
The above explanation might have sounded a bit theoretical, so let’s jump to a specific example.
Before we started using Data Self-Service at TBS, each week our Commercial team would gather to review ongoing projects and discuss capacity for new ones. To make good decisions, it was important to know exactly which projects, or searches, already had a placement or an offer, or might be ending soon. Each consultant was responsible for filling this information into a shared table on Notion every Monday morning.
As you might expect, this solution was far from perfect. Often people forgot, or were sick, or on holiday. One member of the Commercial team thus had to check with each consultant and manually log this information before the Monday meeting. The data was often incomplete or quickly out of date. If a hire was made on a Tuesday, no one would know until the next week.
Instead, we decided to use the Offers and Hires information already logged as part of our recruitment process in our ATS (Navis). We integrated this data with Power BI and built a simple report that showed all candidates with offers and all hired candidates. The result in Power BI was just a simple table. It did not require any fancy visuals, statistical analysis or even Machine Learning (as some companies might try to sell you).
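The logic behind such a table is little more than a filter and a sort over candidate records. As a minimal sketch (with illustrative field and stage names, not Navis's actual schema), it might look like this:

```python
# Stages that should appear in the "Offers and Hires" report.
REPORT_STAGES = {"offer", "hired"}

def offers_and_hires(candidates):
    """Filter ATS candidate records down to offers and hires,
    newest first so recent movement sits at the top of the table."""
    rows = [c for c in candidates if c["stage"] in REPORT_STAGES]
    return sorted(rows, key=lambda c: c["stage_date"], reverse=True)

# Usage: candidates still in earlier stages are excluded from the report.
candidates = [
    {"name": "A", "stage": "interview", "stage_date": "2021-05-01"},
    {"name": "B", "stage": "offer", "stage_date": "2021-05-20"},
    {"name": "C", "stage": "hired", "stage_date": "2021-06-01"},
]
report = offers_and_hires(candidates)
```

That is the whole trick: because the data is already entered as part of the recruitment workflow, the report is just a view over it, with nothing extra to maintain.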
The value this table delivered was immediate:
No more duplicate manual data entry. We only need to enter the data once in Navis and it appears everywhere it is needed.
The data is always up to date. So the information can be used anytime in the week.
The data is well structured. You can also easily see historical information.
Simple, yet effective. Since this centralised information is available to the whole organisation, anyone at TBS can use it. This created some unexpected new benefits. We saw researchers checking this data to see if a search similar to their own was coming to an end, so that they could contact its candidates regarding the role they were working on. This increased the efficiency of our sourcing. We later transformed this “Hires” table into a full-fledged Track Record, which also replaced the manually updated table in Notion.
In the last 90 days, our report on Power BI has had over 550 views — that is more than 5 views a day in a 100-person company. And the greatest thing is that every single person in the company can view it, completely on their own, without needing to get a data analyst involved.
This is just the beginning.
A simple table in Power BI might not seem like much, especially in comparison to the Machine Learning models other companies might boast about. Yet, as this article hopefully shows, simple solutions can add a lot of value and are easy to build. On top of that, what I described in this article was just the beginning: the first step on the journey towards predictive models and other smart tools. From my experience working in a service-based company like TBS, it is very important for the first step to be simple and provide value directly, as it helps build a better data culture that opens up the path to more complex solutions in the future.