People analytics traces back more than a century, to early efforts to apply scientific methods to behavioral data.
However, it wasn’t until the advent of technological tools such as artificial intelligence that the true impact of people analytics could be felt, particularly in the world of Human Resources.
Automating the analysis of these large datasets has greatly improved HR processes and, moving forward, can serve as a building block for a more equitable workplace.
Yet while these tools have been used successfully for hiring, measuring productivity, promoting workers, and more, cracks in the technology have simultaneously come to light.
For instance, Amazon had to scrap a resume-screening platform built by its own engineers after it was found to be biased against women. And that’s just the tip of the iceberg: automated systems have been found time and again to encode inherent bias, often absorbed from the data they are trained on.
Still, there are ways engineers can address these issues and make people analytics equitable for all.
This starts with recognizing historical bias in old and existing data. Because people analytics tools typically draw on data spanning many years, their algorithms may need to be adjusted to account for the context and biases of the periods that produced the data.
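One practical first step is a simple audit of the historical data itself before any model touches it. The sketch below is a minimal, hypothetical illustration (all records, years, and group labels are invented): it tabulates hire rates by year and group so a reviewer can spot eras where outcomes diverged sharply and treat those records with care.

```python
# Hypothetical sketch: auditing historical hiring records for group
# disparities by time period. All data and field names are invented.
from collections import defaultdict

# Each toy record: (year, group, hired_flag).
records = [
    (2010, "A", 1), (2010, "A", 1), (2010, "B", 0), (2010, "B", 0),
    (2018, "A", 1), (2018, "A", 0), (2018, "B", 1), (2018, "B", 1),
]

def hire_rates_by_year_and_group(rows):
    """Return {(year, group): fraction hired} so reviewers can flag
    periods where one group's outcomes diverge from another's."""
    totals = defaultdict(lambda: [0, 0])  # (year, group) -> [hired, count]
    for year, group, hired in rows:
        totals[(year, group)][0] += hired
        totals[(year, group)][1] += 1
    return {key: hired / count for key, (hired, count) in totals.items()}

rates = hire_rates_by_year_and_group(records)
# In this toy data, 2010 shows a 1.0 vs 0.0 gap between groups A and B;
# a model trained naively on that era could absorb the gap as signal.
```

An audit like this doesn't fix bias by itself, but it tells engineers which slices of the historical data need contextual adjustment or exclusion before training.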
Another issue is the way these algorithms measure success.
For example, take the use of undergraduate GPAs as a measure of intelligence for new hiring prospects. What these numbers don’t tell recruiters is that candidates from lower-income backgrounds may have had to work while attending school, potentially lowering their overall GPA.
Identifying discrepancies between what a leader wants to measure (comprehension skills) and what they are actually measuring (GPA) will be essential to building unbiased people analytics tools.
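The proxy gap described above can be sketched in a few lines of toy Python. Everything here is invented for illustration (the names, the scores, and the idea of a direct "comprehension score" as the true target): it simply shows that ranking candidates by the proxy (GPA) and by the target skill can select different people.

```python
# Hypothetical sketch of the proxy problem: the metric a screen uses
# (GPA) versus the skill a leader actually wants (comprehension).
# All names and numbers below are invented for illustration.
candidates = [
    # (name, gpa, comprehension_score, worked_during_school)
    ("Avery", 3.9, 78, False),
    ("Blake", 3.4, 91, True),   # lower GPA, worked through school
    ("Casey", 3.7, 84, False),
]

top_by_gpa = max(candidates, key=lambda c: c[1])
top_by_comprehension = max(candidates, key=lambda c: c[2])

# The proxy and the target disagree: a GPA-only screen would filter
# out the strongest candidate on the skill actually being sought.
proxy_and_target_agree = top_by_gpa[0] == top_by_comprehension[0]
```

In this toy data the two rankings pick different candidates, which is exactly the discrepancy an engineer should surface before wiring a proxy metric into an automated screen.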