Exaud Xperts’ Talks: Alda Canito (Part I)
We’re happy to introduce Exaud’s new blog segment – Exaud Experts’ Talks. Get ready to meet really cool (😎) IT professionals with all sorts of backgrounds and experiences, who are passionate about things beyond technology.
We start off with Alda Canito. Alda is a teacher at ISEP, where some of our team members took their own university courses, as well as a PhD candidate researching at GECAD. Besides being a teacher and a researcher, Alda is a talented artist and has been working on her own webcomic, ‘O Sarilho’, for the past few years. Get to know her in this two-part interview!
You’re currently working as a teacher and researcher at Instituto Superior de Engenharia do Porto. Could you tell us a bit about both your research area and teaching field?
I teach Computational Principles in the Department of Informatics and I am also a PhD candidate researching at GECAD. The two topics are quite distinct. My teaching job involves a single, very introductory first-year subject that covers topics such as information representation, the Linux command line, and shell scripting. My research area, on the other hand, involves Intelligent Systems and Artificial Intelligence. I particularly focus on knowledge representation and reasoning using ontologies, which is concerned with how we can use logic to describe and ascribe meaning to data.
What are the biggest challenges we currently face in Intelligent Systems?
Oh no, that is too broad a question! I’m going to pull the conversation toward the topics I enjoy discussing and say that nowadays we have a problem with the way Artificial Intelligence is perceived by the masses, and our communication strategies are not helping.
This was the problem 50 years ago and it’s the same problem now: what the public expects is not what is being achieved, because there’s a miscommunication problem. Sure, there have been incredible advances recently, but a lot of them come not because new technologies have been developed – many of the algorithms currently employed in machine learning and data mining solutions were developed decades ago – but because there has been a massive leap in accessible hardware, and we have volumes of data like never before.
The expressions “machine learning” and “data mining” are very much in vogue these days, and it’s always interesting to see students extremely hyped about learning how to use them, only to find out it’s just Statistics Rebranded™. That is not intelligent. That is not ascribing meaning to the data; humans are still the ones doing that task and, most importantly, interpreting both the inputs and outputs of these algorithms. And I believe the important question here is: who interprets what? Because that is still very much a human task. Who decides which data gets used, how it gets labeled, how it is cleaned, what is valid and what isn’t, how to validate the outputs to ensure they’re useful… All of these tasks are still very human at heart, and they’re the ones that really influence the results of these often-labeled “intelligent” approaches. This is why you see a lot of news that goes something like “oh no, this algorithm is racist”. The algorithm is just a bunch of steps the data has to go through; it performs no judgement. Judgement has already been performed by the time the dataset gets to the algorithm. Show me the bias in the dataset. Tell me why this data was labeled this way. Explain why the data isn’t properly balanced. We’ve gotten machines to be very good at solving very specific problems, under very specific circumstances; and that’s it. If we don’t give them good inputs, we cannot expect good outputs.
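The point that the judgement lives in the dataset rather than the algorithm can be sketched with a tiny, made-up example (the data, groups, and labels here are entirely hypothetical): a perfectly neutral frequency-based predictor, trained on historically skewed labels, faithfully reproduces the skew.

```python
from collections import Counter, defaultdict

# Hypothetical historical dataset of (group, decision) pairs.
# The labels already encode a past human judgement: group "B"
# was denied far more often than group "A".
history = (
    [("A", "approve")] * 80 + [("A", "deny")] * 20
    + [("B", "approve")] * 20 + [("B", "deny")] * 80
)

# A deliberately simple "model": predict the most common past
# decision for each group. No step in this code is malicious.
counts = defaultdict(Counter)
for group, decision in history:
    counts[group][decision] += 1

def predict(group):
    """Return the most frequent historical decision for a group."""
    return counts[group].most_common(1)[0][0]

print(predict("A"))  # approve
print(predict("B"))  # deny -- the bias came in with the labels
```

The algorithm here is just counting; the discriminatory outcome was decided upstream, when the dataset was assembled and labeled, which is exactly the kind of human judgement she describes.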
You are currently teaching Computational Principles to first year students of Informatics and Software Engineering. How do you introduce an entire class to such a specific subject, especially when they’re fresh out of high school?
I have to start by saying that I do not teach alone; I am part of a team. The other professors have much more experience than me and have been teaching the subject for a while now… which makes it a lot easier for me. A lot of the material was organized beforehand, so much of it is more a matter of when to approach a topic than how.
Now, Computational Principles is a very, very introductory class, so I’m covering basics that don’t really require much previous knowledge, or that are being simultaneously approached in other subjects that first-year students have. If you’re taking all the first-year subjects at the same time, which is the most common scenario, there will be some synergies between them. For example, by the time we’re finally getting to Linux commands or shell scripting, students have already gotten a taste of some coding principles in their Algorithms and Programming class, so, while it’s a different setup, there are still a lot of similarities.
I usually don’t have to cover basics such as how value assignment works, or control structures. But when it comes to information representation or arithmetic in different bases, 9 times out of 10 the students have never had contact with the topic before. And it’s only natural: when you’ve spent your entire life working with 10 digits (from 0 to 9) to represent numbers, i.e., the decimal system, you’re very likely to be confused when the professor tells you “OK, now do the same thing but in binary, where you only have two digits to represent everything”. The fundamentals are the same, but the form is different enough to raise questions, and that’s only natural. A lot of the time you just have to remind the students that they actually already know how to do this, that they’ve had maths their entire lives. And then you teach them to do division again, which is always fun.
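The “do division again” remark refers to the standard repeated-division method for converting a decimal number to binary: divide by 2, keep the remainder as the next bit, and repeat until nothing is left. A minimal sketch (the function name is our own, for illustration):

```python
def to_binary(n):
    """Convert a non-negative integer to a binary digit string
    by repeated division by 2, collecting the remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder of the division is the next bit
        n //= 2                  # integer-divide and repeat
    # Remainders come out least-significant first, so reverse them.
    return "".join(reversed(bits))

print(to_binary(13))  # 1101  (13 = 8 + 4 + 1)
```

It’s the same long division students already know from decimal arithmetic, just with 2 as the divisor instead of 10.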
Stay tuned for the second part of Alda’s interview!