
What is ‘algorithmic bias’, and why smart cities must act now

Data does not always tell the truth; machines lie. Algorithmic bias means that fairness and equality, the ultimate promises made by technology as it re-writes the rulebook, remain relative.

“Data reflects the social, historical and political conditions in which it was created. Artificial intelligence systems ‘learn’ based on the data they are given. This, along with many other factors, can lead to biased, inaccurate, and unfair outcomes.”

So says the AI Now Institute, an interdisciplinary research centre based at New York University, established to probe the social implications of artificial intelligence (AI).
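To make the institute's point concrete, consider a toy sketch, entirely synthetic and not drawn from any real case: a model fitted to historically skewed decisions will faithfully reproduce the skew, because the labels it learns from already encode it.

```python
# Toy sketch: a model fitted to historically biased decisions learns to
# reproduce them. All data here is synthetic and for illustration only.
import random

random.seed(0)

def historic_decision(group, skill):
    # Historical decisions held group "B" to a stricter bar.
    return skill > (0.45 if group == "A" else 0.60)

data = [(g, random.random()) for g in ("A", "B") for _ in range(5000)]
labels = [historic_decision(g, s) for g, s in data]

# "Training": for each group, pick the skill threshold that best matches
# the historical labels. Group membership is just another input feature.
def fit_threshold(group):
    rows = [(s, y) for (g, s), y in zip(data, labels) if g == group]
    return min((t / 100 for t in range(101)),
               key=lambda t: sum((s > t) != y for s, y in rows))

for g in ("A", "B"):
    print(f"learned bar for group {g}: {fit_threshold(g):.2f}")
# Prints roughly 0.45 for A and 0.60 for B: the old unfairness, recovered
# exactly by an apparently objective procedure.
```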

There are many examples of algorithmic bias, inadvertently revealing the implicit values of the humans at the controls of computer systems. Most notably, its insidious impacts have been made evident in search engine results and social media platforms.

New Scientist ably summarised five instances of algorithmic bias in an article in April, including the racial prejudice of algorithms used variously in the US justice system and law enforcement, gender prejudice in Google search results for ‘CEO’, and a case in which Facebook’s translation software misread a local idiom, leading to a wrongful arrest for inciting terrorism.

These cases need no further examination here, but the basis of such miscalculations must be considered. McKinsey & Company details three bias models in machine learning, as follows, with brief explanations included.

1 | Anchoring

“Machine learning promises to improve decision quality, due to the purported absence of human biases. Human decision makers might, for example, be prone to giving extra weight to their personal experiences. This is a form of bias known as anchoring, one of many that can affect business decisions.”

2 | Availability

“Availability bias is another. This is a mental shortcut (heuristic) by which people make familiar assumptions when faced with decisions. The assumptions will have served adequately in the past but could be unmerited in new situations.”

3 | Confirmation

“Confirmation bias is the tendency to select evidence that supports preconceived beliefs, while loss-aversion bias imposes undue conservatism on decision-making processes.”
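Of these, confirmation bias translates most directly into code. The toy sketch below, with synthetic numbers not drawn from McKinsey, shows how filtering evidence to fit a prior belief can manufacture a relationship that does not exist.

```python
# Hypothetical sketch of confirmation bias as a data-selection problem.
# The two variables below are independent by construction.
import random

random.seed(1)
population = [(random.random(), random.random()) for _ in range(10000)]

def p_b_high_given_a_high(rows):
    b_values = [b for a, b in rows if a > 0.5]
    return sum(b > 0.5 for b in b_values) / len(b_values)

print(f"all evidence:      {p_b_high_given_a_high(population):.2f}")  # ~0.50

# An analyst convinced that "A drives B" keeps only the confirming cases
# (both high or both low) and discards the rest as noise.
confirming = [(a, b) for a, b in population if (a > 0.5) == (b > 0.5)]
print(f"filtered evidence: {p_b_high_given_a_high(confirming):.2f}")  # 1.00
# The filtered sample shows a perfect "relationship" that does not exist.
```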

As data analytics and machine learning spread into even simple administrative processes, the implications of algorithmic bias for smart cities should ring alarm bells.

Some tech firms are acting to fight algorithmic bias. IBM is about to release a giant dataset of annotations for over one million images to improve understanding of bias in facial analysis. It will make available a further 36,000 annotated images to provide a more diverse dataset for people to use in the evaluation of their technologies.

“This will specifically help algorithm designers to identify and address bias in their facial analysis systems. The first step in addressing bias is to know there is a bias — and that is what this dataset will enable,” said IBM in a statement.
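What such an audit might look like in practice: the sketch below compares a face-analysis model’s error rate across annotated demographic groups. The record fields and the model here are hypothetical placeholders, not IBM’s actual schema or tooling.

```python
# Minimal sketch of a per-group error audit; a gap between groups is the
# red flag the quote describes. Field names are invented for illustration.
from collections import defaultdict

def audit(records, predict):
    """records: dicts with an image, a demographic annotation, and a true
    label; predict: any callable returning the model's predicted label."""
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        group = r["annotation"]          # e.g. a skin-tone or age band
        totals[group] += 1
        errors[group] += predict(r["image"]) != r["label"]
    return {g: errors[g] / totals[g] for g in totals}

# Toy usage with stand-in data and a stand-in model.
records = [
    {"image": "img_0", "annotation": "group_a", "label": 1},
    {"image": "img_1", "annotation": "group_b", "label": 0},
]
print(audit(records, predict=lambda image: 1))
```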

But the perception is that technology firms are not doing enough. “Biased algorithms are everywhere, and no one seems to care,” wrote MIT Technology Review just 12 months ago. The responsibility falls to, and should anyway be seized by, the civic authorities handling citizen data.

New York City is showing the way. In May, it announced the creation of a data task force, the first of its kind, to oversee ‘automated decision systems’ and to develop a process for reviewing how advanced analytics are applied to administrative duties.

“As data and technology become more central to the work of city government, the algorithms we use to aid decision making must be aligned with our goals and values,” said Mayor Bill de Blasio, describing the move as a first step towards “greater transparency and equity in our use of technology.”

The New Yorker said the move came after James Vacca, a member of the New York City Council, noted potential bias in local policing: his Bronx precinct was unable to explain the “criteria and formula” behind its automated staffing decisions.

Vacca drafted a bill proposing that city agencies wishing to use an automated system to apportion policing, penalties, or services should make the source code available to the public for scrutiny. The law as passed replaces that disclosure requirement with the task force.
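As a hedged illustration of what such a review could involve, the sketch below computes the ‘four-fifths’ disparate-impact ratio, a long-standing rule of thumb from US employment guidance; the decision data is invented.

```python
# One simple check a review process could run on an automated decision
# system's outputs: the "four-fifths" disparate impact ratio.
def disparate_impact(decisions):
    """decisions: list of (group, selected) pairs; returns the ratio of
    the lowest group selection rate to the highest, plus the rates."""
    rates = {}
    for group in {g for g, _ in decisions}:
        picks = [s for g, s in decisions if g == group]
        rates[group] = sum(picks) / len(picks)
    return min(rates.values()) / max(rates.values()), rates

# Invented outcomes: group A selected 60% of the time, group B 30%.
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + \
            [("B", 1)] * 30 + [("B", 0)] * 70
ratio, rates = disparate_impact(decisions)
print(rates)                 # {'A': 0.6, 'B': 0.3}
print(f"ratio {ratio:.2f}")  # 0.50, well under the 0.8 rule of thumb
```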

The move should be applauded, and copied everywhere. New York City is leading the smart city charge, and its ethical stance on data, and on the application of data in pursuit of efficiency and intelligence, deserves close attention. The importance for cities to understand, manage and guard data cannot be overstated.

It is arguable that publicly elected city authorities, rightly commanding increasing power and influence as urbanisation continues, have the upper hand in the fight for public trust, compared with both private enterprises, notably technology companies, and central governments.

With the increased automation of civic services, it is paramount that data is treated independently, and not coloured by the bias of human history.

ABOUT AUTHOR

James Blackman
James Blackman has been writing about the technology and telecoms sectors for over a decade. He has edited and contributed to a number of European news outlets and trade titles. He has also worked at telecoms company Huawei, leading media activity for its devices business in Western Europe. He is based in London.