There has been a lot of talk lately about unconscious bias in our society. Every person on this planet is biased in one way or another. You can be biased about gender, minorities, brands, or other people. You can have biased behaviour and biased opinions. You can pick up biases from the way you were brought up, the environment you grew up in, the experiences you have had throughout life, and so on.
We accept bias as an unavoidable feature of being human, but should we accept it as part of any system, AI or not, that impacts our lives?
One of the biggest sources of bias in an AI or analytics solution is the data. The data is used to train the model (a step also called the learning process), and once trained, the model can be used to make predictions. If your data is biased, whether because of the data-gathering process or because of a biased sample of your population, your model is going to be biased as well. Some people might think this is not so bad; maybe there are situations where bias can be expected or accepted. Personally, I believe bias should be top of mind when building such a model, to ensure it behaves to the benefit of its users once released into the world.
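To make this concrete, here is a minimal sketch (with made-up toy data, not any real system) of how a model can inherit bias from its labels. The "model" is deliberately naive, just memorised acceptance rates per group, but the point carries over to more sophisticated learners: if the historical labels disadvantage one group, the trained model reproduces that disadvantage.

```python
from collections import defaultdict

# Hypothetical toy data: (group, skill score, past decision).
# The past decisions are biased: group "B" applicants were rejected
# more often than group "A" applicants at the same skill level.
data = [
    ("A", 7, 1), ("A", 5, 1), ("A", 3, 0), ("A", 6, 1),
    ("B", 7, 0), ("B", 5, 0), ("B", 3, 0), ("B", 6, 1),
]

# "Training": memorise the historical acceptance rate per group.
totals, accepted = defaultdict(int), defaultdict(int)
for group, score, label in data:
    totals[group] += 1
    accepted[group] += label

def predict(group):
    # Predict "accept" when the group's historical acceptance rate
    # exceeds 0.5 — the bias in the labels becomes the model's output.
    return accepted[group] / totals[group] > 0.5

print(predict("A"))  # True  (historical rate 0.75)
print(predict("B"))  # False (historical rate 0.25)
```

Nothing in the training step "decided" to be unfair; the unfairness was already encoded in the labels, and learning simply preserved it.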
For example, let's say a company wants to automate the first stage of a job application process: automatically deciding which applicants move forward. It gathers the data from previous online application forms, extracts key words and phrases from the applicants' CVs, and uses the past decisions as yes/no labels. From this data, it builds a model that can automatically process applications. Of course, such a model can save a lot of money and time, but if the previous decisions were in any way biased (towards a specific gender or minority, for example) then the model is going to be biased too. To ensure the model is not biased, both the data and the resulting model have to be carefully reviewed.
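One simple way to start such a review (a sketch, not a complete audit) is to compare selection rates between groups in the historical decisions. The check below uses invented numbers and the "four-fifths" rule of thumb from US employment guidelines, which flags potential disparate impact when one group's selection rate falls below 80% of another's.

```python
# Hypothetical past decisions (1 = applicant moved forward),
# split by a protected attribute such as gender.
group_a = [1, 1, 0, 1, 1]
group_b = [1, 0, 0, 0, 1]

def selection_rate(decisions):
    # Fraction of applicants in the group who were selected.
    return sum(decisions) / len(decisions)

rate_a = selection_rate(group_a)  # 0.8
rate_b = selection_rate(group_b)  # 0.4

# Four-fifths rule of thumb: flag the data if the lower rate is
# below 80% of the higher rate.
impact_ratio = rate_b / rate_a    # 0.5, well below the 0.8 threshold
flagged = impact_ratio < 0.8

print(f"impact ratio: {impact_ratio:.2f}, flagged: {flagged}")
```

A flagged ratio does not prove discrimination on its own, but it tells you the labels deserve scrutiny before any model is trained on them.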
The great promise of AI is to revolutionise the world, hopefully for the better: to change every industry, to touch the life of every inhabitant of this planet, to improve healthcare, education, wellbeing and standards of living, and to make businesses and individuals more successful and fulfilled. But can any AI system achieve that if it is designed, consciously or unconsciously, to be biased? AI should aim to make the world fairer, not less fair.
How would we build such a system? By designing standards, regulations, detailed processes and methodologies? Or simply by ensuring that our teams are diverse enough? I don't have an answer, but I hope somebody does, and if not, I hope we will work together to find one. If we get this right, we can move closer to an unbiased world.