Let's be Fair (using AI)

October 26, 2019 | 4 minute read
John Featherly
Cloud Native Architect

Like most children, when I was five years old I would sometimes complain to my father, "that's not fair!" My brothers had more of something than I did, or I had suffered one of any number of clear-cut injustices or inequities a five-year-old faces. My father's usual response was "life isn't fair". A straight, low-key, matter-of-fact reply. Frustrating as it may be to a five-year-old, the literal meaning of the response sets the framework for a deeper meaning of fairness: the narrower the context in which a judgment of fairness is made, the more compromised the judgment becomes. For a five-year-old, all contexts are quite narrow; that is pretty much the definition of being five years old.

AI & Fairness

Recent advances in AI technology paint a trajectory of growing automation capabilities in all facets of human endeavor. Bounded-context applications such as fabrication, agriculture, or Mars rovers have, within their context, no substantial fairness concerns. Unbounded-context applications, particularly in our social and economic systems such as banking, employment, and law enforcement, have significant fairness concerns. There is an established acronym covering this topic: FATE, for Fairness, Accountability, Transparency, and Ethics.

The worst (least fair) thing we can do in the unbounded applications is to claim the context the AI components are developed in as the complete context and to exclude a judicial process capable of a broader re-evaluation. At some point in the future we may find a way to create AGI capable of the unbounded judicial role, but for the time being it is up to humans. It is shortsighted to believe fairness can be addressed by "removing bias from the data", by "properly cleaning the data", or by adjusting the algorithms and data so that results match external policies. The data will always be biased as long as it is finite (which will be a long time). The fair approach is to revisit and re-imagine the overall systems being automated and to create integrated solutions where AI does what it does best and humans do what they do best.

exempli gratia

To get an idea of what's in play here, let's take a look at an example: job recruiting.

Applicant screening has become a popular application of ML in recruiting systems. On the surface it sounds like an obvious win: develop and train an ML model to score resumes against job descriptions. The fairness of this solution is problematic from inception in the areas of feature selection and training datasets. Forbidding prejudicial features such as gender is a noble attempt at fairness, but neural networks have a way of seeing everything in the data, particularly second-order effects or implications that may not be apparent to humans. A zip code feature can turn out to be masquerading as a race feature. Applying EEO regulations to the training data and the algorithms to select at prescribed ratios for gender and race may feel like building a fair solution, but it is a step in the wrong direction. We end up limiting and potentially narrowing the context of decisions, actually adding bias, and ultimately promoting a local optimization as the best solution.
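To see how a forbidden feature can leak back in through a proxy, here is a minimal sketch using synthetic data and scikit-learn. The column names, correlations, and coefficients are invented for illustration; this is not a real recruiting dataset or a prescribed model.

# Minimal sketch: a model trained WITHOUT the protected attribute
# can still learn it through a correlated proxy (zip code).
# Synthetic data; all names and correlations are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute, never shown to the model.
group = rng.integers(0, 2, n)

# Zip code correlates strongly with the protected group
# (think residential segregation).
zip_region = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical hiring labels carry a bias in favor of group 1.
skill = rng.normal(size=n)
hired = (skill + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

# Train only on the "neutral" features: skill and zip region.
X = np.column_stack([skill, zip_region])
model = LogisticRegression().fit(X, hired)

# The model reproduces the disparity without ever seeing the group.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate, group {g}: {rate:.2f}")

Running this shows a markedly higher predicted hire rate for group 1, even though the protected attribute was excluded from training; the zip feature carried it in.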

Stepping back, the real opportunity to use AI to improve fairness here is to recognize the typical lack of veracity in resumes and to explore AI-enabled methods of building a wider, unsolicited picture of the candidate's skills, potential, and interests. On the job description side, postings are seldom more than a list of skill and experience requirements, often cliché-ridden and shallow. AI technology could be used to build a broader picture of company objectives, where and what help is needed, and long-term career potential. There is a lot more work involved in collecting, managing, and analyzing data, but that is exactly where ML technology can help, enabling the approach. After the initial concerns of "is this a good candidate for the job?" and "is this a good job for the candidate?", additional concerns such as government regulations and corporate policies can be applied. There is no reason ML technology cannot be used in the policy stages; there the tools honestly help apply policy and are not unfairly intertwined with fitness selection.
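One way to read that separation of concerns, as a minimal sketch: keep fitness scoring and policy application as distinct, individually auditable stages. The stage names, fields, and the policy rule below are hypothetical, not a prescribed design.

# Minimal sketch of separating fitness selection from policy application.
# All names, fields, and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    fitness_score: float   # output of the ML fitness stage
    needs_work_visa: bool

def fitness_stage(candidates):
    # Stage 1: rank purely on "is this a good candidate for the job /
    # a good job for the candidate".
    return sorted(candidates, key=lambda c: c.fitness_score, reverse=True)

def policy_stage(ranked, sponsors_visas: bool):
    # Stage 2: apply explicit, documented policy AFTER fitness ranking,
    # so each rule is visible and separately reviewable by humans.
    return [c for c in ranked if sponsors_visas or not c.needs_work_visa]

pool = [Candidate("A", 0.91, True), Candidate("B", 0.78, False)]
shortlist = policy_stage(fitness_stage(pool), sponsors_visas=False)
print([c.name for c in shortlist])   # ['B']

Because the policy lives in its own stage, it can be audited, changed, or challenged without retraining or second-guessing the fitness model.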

Notice that by going beyond traditional resumes and job descriptions, this approach broadens the applicant pool to candidates who are not actively looking. Re-imagine your HCM applicant screening system as an AI-enabled recruiting firm.

To be Fair:

  • do not automate and optimize existing processes that are already marginal or just plain poor in order to claim "AI made it great"

  • revisit the fundamental objectives of the process

  • explore new approaches based on search and analysis of large amounts of data

  • explore new approaches that leverage semantic technology and knowledge engineering

  • mandate continuous improvement of the search and analysis algorithms

  • mandate continuous collection, cleaning and relating of the "large amounts of data"

  • attempt continuous improvement of the knowledge data (ontologies)

  • include a "general intelligence" judiciary in the system

  • foster humans involved in the process to be generally intelligent, to contribute what machines can't: imagination, creativity, and broader context

Fairness in using AI is more of a journey than a destination. Emerging AI technology holds exciting promise for tools beneficial to our societies and our planet. Think about what we are doing and make the effort required. We're not five-year-olds.
