Auditing AI: When can we trust algorithms? 

  • Dr. Adrian Byrne, 27/09/2022

I currently work on auditing AI algorithms to improve explainability and detect bias. While this may sound quite specific, the overarching area of my research is often referred to as trustworthy AI, ethical AI or responsible AI. I have found this research area to be vast, with a lot to get to grips with, including deciphering whether or not different terms mean the same thing! I find it critical to build a research plan around tangible research questions; otherwise, it is quite easy to become overwhelmed in this area.

My current work centres on consuming a lot of literature, talks, presentations and podcasts. This enables me to keep abreast of the latest thinking and thought leaders in my research area. I also make my own contribution by writing articles, creating presentations and speaking at various events. I have discovered that my research area is very topical at the moment, especially thanks to the proposed EU legislation on AI (the AI Act). Many different stakeholders have an interest in this area, and I have enjoyed the separate challenges of creating content for both mainstream and peer-review audiences.

It's an exciting time to be working in this area: the future of trustworthy AI is yet to be shaped, so there are many opportunities for researchers to play their part. I propose to draw upon my social statistics background to develop a protocol for bias detection that involves building an explainable model; a minimal sketch of the idea follows below. Indeed, my background in social statistics and economics helped me move into this research space. As a quantitative social scientist, I am not your typical data/computer scientist, and therefore I bring different experiences and skills to bear in this area. In essence, I have benefited from the growing chorus of voices calling for a more multidisciplinary approach to trustworthy/ethical/responsible AI.
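To make that concrete, here is a minimal sketch in Python of what one step of such a protocol could look like. Everything in it is an assumption for illustration: the synthetic data, the feature and column names, and the choice of demographic parity as the bias metric; it is not the project's actual method. The idea is to train an interpretable model, whose coefficients double as explanations, and then compare positive prediction rates across a sociodemographic group.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data. "group" is the sociodemographic attribute the
# audit slices on; income is correlated with it, so bias can leak into the
# model through a proxy even though the protected attribute is excluded.
rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)                  # protected attribute, coded 0/1
df = pd.DataFrame({
    "income": rng.normal(50, 15, n) + 10 * group,
    "tenure": rng.normal(5, 2, n),
    "group": group,
})
logit = 0.05 * df["income"] + 0.3 * df["tenure"] - 4.5
df["y"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

train, test = train_test_split(df, test_size=0.3, random_state=0)
features = ["income", "tenure"]                # "group" is not a model input

# An interpretable model: logistic regression coefficients double as
# per-feature explanations of the decision boundary.
model = LogisticRegression().fit(train[features], train["y"])
for name, coef in zip(features, model.coef_[0]):
    print(f"coefficient for {name}: {coef:+.3f}")

# Bias check: demographic parity difference, i.e. the gap in positive
# prediction rates between the two groups on held-out data.
preds = pd.Series(model.predict(test[features]), index=test.index)
rates = preds.groupby(test["group"]).mean()
print(f"positive prediction rate by group:\n{rates}")
print(f"demographic parity difference: {abs(rates.loc[0] - rates.loc[1]):.3f}")
```

Note the deliberate design of the toy data: the protected attribute is excluded from the model, yet a gap still surfaces because a correlated feature acts as a proxy; this is exactly the kind of pattern an audit protocol needs to catch.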

Another exciting aspect of my work is that my time is split between an academic research institution specialising in AI (CeADAR) and a company that provides AI products and services (Idiro Analytics). This arrangement affords me the dual opportunity of experiencing life in academia and life in industry. The former is filled with peer-reviewed papers, conferences and grant writing, while the latter is filled with marketing, business events and product/service development. While the two sound different, there is quite a bit of overlap between them, and much of what I do is complementary to both.

The biggest challenge I'm currently facing within my research area is building a translational bridge between blue-sky thinking/talking and grassroots action, i.e. how do we put trustworthy AI into practice? How do we trust the audit process for an algorithm? How do we trust the explanations that flow from the algorithm? How do we trust the data used by the algorithm? Can an algorithm be both fair and unbiased? I share these questions to promote the idea that AI deserves this level of scrutiny and should not get a "free pass". We don't permit pharmaceutical drugs for public consumption until they have passed a rigorous controlled trial process, so why not adopt a similar mindset for algorithms? As one concrete starting point, the sketch below probes a small piece of the explanation question. If these sorts of questions stimulate your mind as they do mine, then please get in touch and let's take the conversation/research forward!
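To show what "trusting the explanations" could mean operationally, here is a hypothetical sketch of one such check, again in Python on synthetic data: refit the model on bootstrap resamples and test whether the feature ranking reported by permutation importance stays stable. The data, the model and the stability criterion are all assumptions for illustration, not an established audit standard.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data with a handful of features, some informative, some noise.
X, y = make_classification(n_samples=2_000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Refit on bootstrap resamples and record the feature ranking each time.
# If the ranking flips between resamples, the "explanation" is fragile.
rng = np.random.default_rng(0)
rankings = []
for _ in range(20):
    idx = rng.integers(0, len(X_train), len(X_train))   # bootstrap sample
    model = LogisticRegression(max_iter=1_000).fit(X_train[idx], y_train[idx])
    imp = permutation_importance(model, X_test, y_test, n_repeats=5,
                                 random_state=0)
    rankings.append(np.argsort(-imp.importances_mean))  # most important first

rankings = np.asarray(rankings)
# Fraction of resamples in which each rank position holds the same feature
# as in the first run: values near 1.0 suggest a stable explanation.
agreement = (rankings == rankings[0]).mean(axis=0)
print("per-position rank agreement with first run:", agreement.round(2))
```

Stability of this kind is a necessary condition rather than a sufficient one: an explanation can be consistently wrong, which is why this would be one check among many rather than an audit in itself.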

Dr. Adrian Byrne - TCD START - #researchMATTERS

Dr Adrian Byrne 

Dr Adrian Byrne is a Marie Skłodowska-Curie Career-FIT PLUS Fellow at CeADAR, Ireland's centre for applied AI, and Lead Researcher of the AI Ethics Centre at Idiro Analytics. With qualifications and work experience in economics and statistics, Dr Byrne is a quantitative social scientist with expertise in multilevel modelling, investigating inequalities at different levels and examining how these inequalities interact. In his current role, Dr Byrne has been awarded EU Horizon 2020 funding via Enterprise Ireland to undertake a research project entitled “Algorithmic auditing for improving model explainability and detecting bias using sociodemographic data”. This project is jointly supported by CeADAR and Idiro Analytics.
