Talking digital and AI: An interdisciplinary approach
Dr Riza Batista-Navarro, Dr Mauricio Álvarez and Dr Filip Bialy discuss the always-evolving fields of digital technology and artificial intelligence (AI) – and Manchester’s important role in their past, present and future.
Listen on:
Sitting down with host Andy Spinoza to talk all things digital and AI are Dr Riza Batista-Navarro, Senior Lecturer in Text Mining at the University; Dr Mauricio Álvarez, Senior Lecturer in Machine Learning; and Dr Filip Bialy, Research Associate here at Manchester, Assistant Professor at Adam Mickiewicz University in Poznań, Poland, and Lecturer at the European New School of Digital Studies.
Our experts discuss Manchester's technological innovation from the Industrial Revolution to the present day, including the development of the Manchester 'Baby' – the world's first electronic stored-program computer – and Alan Turing's pioneering work in AI.
They size up the ethical and political implications of AI and digital advancements and evaluate the University’s current position as a leading centre for progress in this field – aiming to drive innovation and leverage these powerful technologies for the greater good.
Find out more on:
- Digital Futures research platform
- The Institute for Data Science and AI (IDSAI)
- Centre for AI Fundamentals
- Turing Innovation Catalyst (TIC)
- Department of Computer Science
- Centre for Digital Trust and Society (CDTS)
- Digital Campaigning and Electoral Democracy (DiCED) – five-year, cross-national EU-funded project
- SpiNNaker computing platform – enabling brain-inspired AI
- AI – Should we be pessimistic or optimistic? – Alumni Association panel discussion on the future of AI technology
- Watch: Artificial intelligence at The University of Manchester
Hello and welcome to Talk 200, a lecture and podcast series to celebrate The University of Manchester's bicentenary year.
Our 200th anniversary is a time to celebrate 200 years of learning, innovation and research. 200 years of our incredible people and community, 200 years of global influence.
In this series, we'll be hearing from some of the nation's foremost scientists, thinkers and social commentators, plus many other voices from across our university community, as we explore the big topics affecting us all.
In today’s episode, we’ll discuss Manchester’s historical and current contributions in driving the technological advancements that shape our world.
From the first stored-program computer and Alan Turing’s conception of thinking machines to the cutting-edge research currently being done into artificial intelligence, Manchester has long been at the centre of technological and scientific progress.
Can I ask you to introduce yourselves?
Yes, thank you. I'm Riza Batista-Navarro, I'm a Senior Lecturer in Text Mining.
I'm based at the Department of Computer Science in the School of Engineering here at the University.
In my research and teaching, I focus on natural language processing (or NLP) and text mining, and more broadly speaking, artificial intelligence (AI).
My approach to AI is basically developing methods. I specialise in a sub-area known as information extraction; that's what I did in my PhD. But I also take the view of applying those methods to the sustainable development goals, to try to help achieve those SDGs.
So, if I were to categorise or to put a label on what I do, I would say AI and text mining for social good.
My name is Mauricio Álvarez, I'm a Senior Lecturer in Machine Learning.
I lead some machine learning initiatives in the Department.
So, I’m currently the Director of a new AI centre for doctoral training in decision making for complex systems.
I specialise in a particular area of machine learning called probabilistic machine learning. Broadly speaking, a machine learning model is a sort of black box that learns from datasets.
The particularly special thing about probabilistic machine learning is that these models are able to provide uncertainty quantification alongside predictions. So, it's not only saying: "This is what my model predicts", but "I can also give you a measure of uncertainty about that prediction."
I develop those kinds of models for different types of applications: from healthcare to trying to predict pollution concentrations.
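A minimal sketch of what Mauricio means by uncertainty quantification, assuming a Gaussian process model via scikit-learn; the toy data here is invented for illustration and is not from his research:

```python
# A probabilistic model returns not just a prediction but a measure of
# uncertainty. A Gaussian process is a standard example: its predictive
# standard deviation grows as we move away from the training data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(30, 1))          # toy inputs
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.1, 30)  # noisy toy targets

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1**2)
gp.fit(X_train, y_train)

# Predict at a point inside the training range and one far outside it.
X_test = np.array([[5.0], [25.0]])
mean, std = gp.predict(X_test, return_std=True)

# The model is far less certain about the point outside the data range.
print(std[0], std[1])
```

The point of the sketch is the second return value: the model can say "here is my prediction, and here is how much to trust it", which is exactly the extra information a plain black-box predictor does not give you.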
Hello, I'm Filip Bialy, I have a PhD in Political Theory and background in Computer Science.
I am a Research Associate on the Digital Campaigning and Electoral Democracy project, which concentrates on studying how digital, data-driven tools impact campaigning around the world.
My own research also focuses on how narratives about AI and politics may impact public decision-making and our understanding of technology and politics.
If we can start by thinking about the tradition of innovation at Manchester University over a 200-year period – advances in manufacturing and engineering – and how the digital and computing area has followed that same line, with the 'Baby' computer in 1948, led by Professor Tom Kilburn.
Mauricio, could you give us more detail on that?
I think the role of Kilburn and the team around him in coming up with the design of the first stored-program computer was actually instrumental to the founding of our Department of Computer Science here, in 1964, and to all the lead-up to the commercial version of that machine.
And the role that Turing also had in collaborating with them, creating this extraordinary sort of computer at the time.
So just to continue from that, it basically signalled the beginning of modern computing, and it all happened in Manchester. It was quite a significant advancement because, prior to that, to make a machine do a certain task, you had to reprogram it each time – the machine didn't have the capability to essentially have a memory and store a program. So, when the 'Baby' was developed, that proved to everyone that "oh yeah, okay, it's possible to store a program for a computer, for a machine to have a memory". And that's become hugely influential.
I think it's worth mentioning that Manchester's 'Baby' was developed as part of a project funded by the Royal Society in 1946. The grant recipient was Professor Max Newman, a mathematician who had been a lecturer at Cambridge before the war, when Alan Turing attended the university.
And it was Newman's lectures that inspired Turing to write, before the war, his groundbreaking paper, which we now consider pivotal in the history of computing, because in it Turing developed the so-called 'Universal Turing Machine' idea.
After the war, in 1946, Newman received the grant and founded the computing lab here, in Manchester, and he invited Turing in 1948 to join him in this lab. It is very interesting to see the complex history of the attempts to develop the first computer here because it was first driven purely by scientific academic interest.
Newman wanted to use the computer to solve mathematical problems, but immediately, engineers, and the government – who were interested, in this new post-war era, in using computers for other purposes – joined in the effort. And it created a great environment, I would say, for many people to develop ideas for the first computers.
I mean, can we think about the Manchester team – Freddie Williams, Tom Kilburn and Geoff Tootill – could they have been able to predict the far-reaching implications of their work?
Do you know if they made any predictions at the time?
That is very interesting because they of course were driven by those pure, academic interests, but they also thought about applications of those machines.
And in 1951, Freddie Williams, in his presentation at the inaugural conference of the Ferranti Mark I in Manchester, mentioned several items that would be investigated as a part of developing the machines.
I will just read out what he said: “It would be partial differential equations arising out of biological calculation.” This is something that Alan Turing was interested in.
"Simultaneous linear differential equations and matrix algebra and their applications to the cotton and aircraft industries and electricity distribution" – which is immediately a real-world application. "Tabulation of the gear functions, design of optical systems, Fourier synthesis for X-ray crystallography, design of plate fractionating towers, chess problems."
So these are very different types of applications that could be conceived as part of developing computers, which means that the interest in real impact on the world was always there.
Can we talk about Alan Turing's legacy, starting with the Turing test, perhaps, and those ideas that he first proposed?
I guess this is a bit related to what Filip was saying. I think Turing basically anticipated AI. He started talking about 'thinking' machines and then came up with, as you said, the Turing test, which I personally think is very relevant nowadays.
So the idea is that you have two participants and a judge, and the judge doesn’t necessarily see the two participants.
The judge knows that one of the participants is a machine and the other is a human, but not which is which, and the judge essentially asks questions.
And then, based on the answers, the test is whether the judge can tell which participant is the machine and which is the human.
And nowadays we have this kind of technology all around us: generative AI and large language models, or LLMs, as they're commonly known.
There's always this question of ‘how do I know if something, for example, was written or generated by a machine rather than created by a human?’
So, I feel like the Turing test is very much relevant to what we have nowadays.
Yeah, so to add to that, I think there is an idea today that perhaps the Turing test was of historic value, in the sense that it posed a good research agenda for a lot of people in AI to work on for the following years.
But, I think, beyond the Turing test, the problem of artificial intelligence is way more general. It's about whether we can actually create machines that are able to understand. And there is a lot of controversy today as to whether large language models, for example, simulate understanding rather than actually understanding.
And I think historically the Turing test is important because, back then, it was a way of trying to define what an intelligent agent is. Instead of going into the definition of what it means to be intelligent, Turing created this test to look at whether a computer program would be able to pass it, and that set the agenda for quite a few decades of AI.
But, these days, when you talk to one of these chatbots, you get quite surprised by how sophisticated they are, and a lot of people would say: "Well, they're just passing the Turing test"; but whether they're really understanding is the more fundamental question.
And the creativity, perhaps. I think there's an ongoing debate around whether machines are actually creative or just copying, regurgitating what humans do.
Can we discuss how the University has evolved its approach to technological research over the years, from those early days to where we are now?
Well, I guess from computer science, formalising the education and the training that we do for new generations, since the creation of the Department of Computer Science in 1964.
And creating all these training programmes, getting the proper staff to start teaching them, those staff creating their own research groups and following the way in which research is done around the world these days.
So that tradition of creating an environment where students go from undergraduate to postgraduate research, deep-diving into three years of research and working on very specific problems to then produce their research thesis – all of that has taken time to establish.
And one of the issues is that the technology advances at such a pace that it's a challenge for the courses and the teaching to keep up.
I think that is a real concern, and it's something that we need to deal with, right? So, drawing from my personal experience, I teach a unit called Natural Language Understanding, which draws upon concepts from natural language processing. It's necessary, almost year on year, to update the material because of advances in the world.
So you can't keep the same material from two years ago because then you wouldn’t be teaching your students what they need to know to become a practitioner.
So that is a factor there, but also, to add to what Mauricio said, I think the University has always valued interdisciplinarity.
So, we've been talking about history, right? Clearly mathematicians and engineers have collaborated before – to come up with the 'Baby', for example. Increasingly, the way I look at it, the University has encouraged academics and researchers to work on collaborative projects across different faculties and departments.
And I think that approach is quite important because it means that we might be doing something fundamental, let's say, in Computer Science, but then there's always an aspect of looking towards how that can be applied or how that can be used in other domains.
I think that's an important approach to developing this technology here.
Yeah, I mean, all three of you started your careers outside this country. So you must have had an idea of Manchester, which, you know, likes to define itself as a hub of digital innovation.
I'd be interested to know if that's been the case, if that was how you perceived the city?
Certainly, of course, when I first came here to Manchester, I had this 19th-century image of a city of manufacturing, with tall chimneys and full of smoke. And I was surprised by how beautiful the city is. But I was also surprised at how open the University is and how much it cares about the kind of interdisciplinary work that is necessary today, especially working in the field of the digitalisation of politics.
I couldn't work on that subject without collaboration with people from technical sciences, from Computer Science. So it is important that the University actually supports this kind of collaboration.
How did you perceive Manchester?
Yeah, for me, it was that critical mass around artificial intelligence already at the University. And the University making an effort to create, again, that environment where talent will thrive: students wanting to come here to do their PhDs; researchers wanting to follow their careers here because they see there is a sizeable group able to make a contribution to the area I work in – the fundamentals of artificial intelligence.
And I think that's also possible because, again, interdisciplinarity is a very important aspect...
We've got big medical schools, the health and life sciences, spin-outs... There's a lot for digital technology to apply itself to, isn't there?
Exactly. Yeah.
Yes, it's also a bit similar for me. So, again, knowing about Manchester's rich history drew me to the city.
Like you, I also had this vision of a very industrial city, but, similar to Mauricio, I did my PhD in biomedical text mining, and I felt like: "Oh, I really want to come to a university where I will be part of a group doing interdisciplinary work."
So, for me, that was really important, and diversity is also a key thing. Coming here, I felt really welcome in the city, and I think that was quite important.
AI is changing our world in so many ways, isn't it? And I think we're on the cusp of some large-scale changes that the general public will start to appreciate.
So, can we go around the three of you asking you about your own research and the real-world applications of it?
Yeah, so I’ll make a start. I mentioned earlier that I specialise in an area of natural language processing called ‘information extraction’.
So essentially, briefly speaking, it's about automatically extracting fine-grained information from large amounts of text, with the view of being able to understand what's within the text.
So, one of my projects that I'm quite passionate about is how that information extraction can be applied to try to lower the carbon emissions of the food that people consume.
And I'll try to explain a bit. When people cook at home or maybe eat out, they have become more conscious about the nutritional value of the food they eat.
But maybe not so much about the carbon footprint of what they eat. And I feel like if we integrate some AI into the analysis of recipes – for example, when you cook at home, you follow a recipe – then AI could tell you: "Oops, this recipe is very high carbon, whereas this other one is lower carbon." That could potentially influence your decision-making at that point: which food to prepare, which food to eat.
So that's one thing that I'm quite passionate about. I work on many other projects, probably too many, but just a couple of other examples. I have a PhD student, for example, whose focus is on analysing narratives – dealing with text to try to automatically identify indicators of forced labour. This is in the modern slavery space. The idea is that not everyone can easily detect whether, let's say, a migrant worker is going through forced labour, so we want to see whether AI and natural language processing can understand these texts and help expose those cases.
Another project has to do with automatically identifying hate speech in Ethiopian languages, or rather in languages used in Ethiopia.
So, in social media, as we probably all know, unfortunately there's a lot of aggression and hate speech going around. The idea is to use AI and natural language processing to automatically detect occurrences of hate speech to...
To flag it up to administrators?
To flag it up. Yeah, yeah.
So, as I said in the introduction, I work on the fundamentals of machine learning. This is developing new models to be able to extract, say, more knowledge, or to do more with less data, basically.
I work in the big-data regime too – I have applications there – but most of the developments I do are for low-data-regime problems, where you don't have a lot of data points.
So, one of the applications I'm working on at the moment is in drug-response curves. This is a joint collaboration with colleagues at Imperial and in Singapore, where we have datasets in which we're able to track the response of a cell line to a particular cancer drug – different types of cells associated with different cancers.
So basically, you can think of these curves as plotting the amount of drug applied to the cancer cells against how many cells die from that drug.
The problem is that when there is a new drug, these pharmaceutical companies have to do a lot of different tests again, and this is very, very expensive.
So what you would like to do is create new maths, new algorithms, that can take what we know now and, based on that, generalise as much as possible from the few data samples we can provide for a new drug – reconstructing the whole curve from just a few tries or samples. That has a great impact because it reduces the number of tests these pharmaceutical companies need to do in the lab.
So not just a reduction in cost but also in terms of time.
Exactly, and lab effort. In machine learning, we tend to call this area 'transfer learning'. It's: "How do you learn on one dataset, and then extract that knowledge and apply it to a different dataset?"
So "how do you transfer that knowledge?" Following on from that idea, we have a collaboration with Makerere University in Uganda. Uganda is one of the countries most affected by air pollution, and they can mostly only afford very low-cost sensors – sensors that are really cheap to buy – because the good ones are very expensive. So again, the idea is: how do you create a joint machine learning model that learns both from the many low-cost sensors and from the few high-quality sensors? Learning from all of these, you can improve your predictions towards what you would get if you could afford the high-cost sensors.
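The simplest version of this low-cost/high-quality sensor idea is a learned calibration: use the few readings that are co-located with a reference instrument to correct all the cheap-sensor readings. The sketch below is illustrative only – the data, the bias model (scale plus offset plus noise) and the plain linear fit are assumptions for the example; the actual research uses richer probabilistic, multi-fidelity models.

```python
# Calibrate many biased low-cost readings using a few reference readings.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
true_pm25 = rng.uniform(10, 80, size=200)             # 'true' pollution levels
cheap = 1.4 * true_pm25 + 8 + rng.normal(0, 4, 200)   # biased, noisy cheap sensor

# Only a handful of readings are co-located with a high-quality reference.
co_located = slice(0, 15)
calib = LinearRegression().fit(cheap[co_located, None], true_pm25[co_located])

# Apply the learned calibration to every cheap reading.
corrected = calib.predict(cheap[:, None])
raw_err = np.mean(np.abs(cheap - true_pm25))
cal_err = np.mean(np.abs(corrected - true_pm25))
print(f"mean error raw: {raw_err:.1f}, calibrated: {cal_err:.1f}")
```

Even this crude version shows the payoff: a few expensive measurements lift the quality of the whole cheap network, which is the spirit of the joint model Mauricio describes.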
So that is the kind of research that I'm interested in at the moment.
Thank you. A couple of great examples there, thank you. And Filip, your area is AI as applied to the political arena?
It is – AI and other digital tools. Within the project 'Digital Campaigning and Electoral Democracy', led by Professor Rachel Gibson, we are trying to answer a question that has gathered a lot of public attention in the last decade, I would say since the 2016 election in the US and the so-called Cambridge Analytica Facebook scandal, in which there was this process of microtargeting voters using advanced data-driven tools.
And the question that we want to answer is whether it is actually the case, whether political campaigns are using those tools.
So, I think it's of public interest as such. But since then, of course, we have witnessed the introduction of generative AI with ChatGPT being introduced in the fall of 2022, I think.
So now we add to the problem the use and misuse of AI-powered tools. So, I would say that it immediately has this value of being interesting to the public.
But in more general terms, what I'm working on is also about looking at the narratives about how AI is impacting politics and democracy, because we hear those stories all the time. But is that actually the case? It is important to verify such stories because they can impact decision-making – for example, how this government here in the UK and other governments will react, and what policies they will introduce to prevent the abuse of AI.
And if the impact is not actually that big – and I would say that, for now at least, it doesn't look significant – we may need a different type of legislation than if it were actually impacting the electoral process significantly.
So, your work really feeds into the debate about the ethical implications of AI within the political arena.
Yeah, so ethics, which has been there for a long time, has now gathered renewed attention – I would say after 2015, after the famous AlphaGo, the AI that was able to win at Go, a game much more complex than chess.
Since then, people started thinking that maybe we should pay more attention to developments in this field because they may impact society.
And I would say that it is important to look at those narratives about the impact, and also at how it changes the way people act – not just in the sphere of politics but elsewhere: how it impacts our relationships, how it impacts the way we work, and whether we should be afraid of being replaced by AI.
So, these are societal implications. The field I'm working in – one with a lot of people focused on particular problems – is called 'societal impacts of AI'. It is broader than just ethics because it includes economics and politics; of course it must be based on ethical considerations, but it is much broader now, thanks to those recent developments.
And it's a broader conversation, isn't it, about whether regulation is even possible – whether governments can regulate, or big business can regulate itself? Can I just throw that idea open? Is regulation of AI even possible, or desirable?
I think, yes, in the sense that, as far as I'm aware, there is already a European AI law – the EU AI Act – which looks into regulating these kinds of AI-based applications.
As far as I recall, it outlines certain criteria: if what you are developing as someone working in the space of AI falls under certain criteria, then it might not be allowed at all.
An example of that is an application that is essentially monitoring or surveilling people.
But then, if what you're developing as an AI practitioner is at a lower risk, then it might be allowed, but again, under certain parameters.
So, I think it should be possible. The question is less about whether the regulation is there and more about how you enforce it.
We have laws against cybercrime, but that doesn't stop people from committing cybercrime.
And there's also the tension, again, in enforcing it. There's pressure from the private sector and from industry because obviously they have their own interests, and those might not necessarily align with what's in the regulations. They will always lobby or push back on those regulations, and they're quite influential.
So that's also a reason why sometimes the regulations are not properly enforced.
Yeah, and it also affects the public's access to those tools. There is now a camp that thinks: "We should heavily regulate AI because otherwise these AIs might become conscious or autonomous at some point, and then we won't be able to control them." So only...
So this is an Elon Musk scenario?
Yeah, exactly.
Most of the public, if they've got any understanding of AI, it might be informed by some of his nightmarish predictions.
Yeah, exactly. But then on the other hand, we have this movement advocating the need to have open science, to have open AI.
Not 'OpenAI' as in the company, but projects where the public can actually access these tools and know what these tools are and how they work.
And it also comes down to the difference in the budgets each of those projects can have, because while private companies can invest billions of dollars, I don't think governments are doing the same. They're doing some investment, but not at the same level as these private companies.
Well, that moves nicely onto my question for Filip, really: what are the implications of AI's power for democracy if that AI is controlled by big private-sector interests?
So that is why it is interesting to look at how governments and international organisations are trying to regulate AI, because it says a lot about how much they are afraid of those impacts, but also about how misunderstood the relationship between technology and society sometimes is.
So, speaking about the European Union's approach to AI – the introduction of the AI Act this year – well, it solves some issues, but there are also some problems with it, because it treats AI as just one thing.
But AI is not just this underlying technology based on machine learning; it is a variety of different applications. Some of them are quite important to develop because they can help in medical research, for example, but some of them are purely commercial.
And then there is another layer of how people will use those tools, because you may have this intended purpose of using ChatGPT to learn something, or maybe to write something for, let's say, marketing purposes – and that is something people are doing right now.
But it can also inform some dangerous activities – it could be used to automate propaganda and disinformation.
So, there are many different applications of the technology, and it would require separate regulation of each sphere.
So, you are saying that the EU approach is too crude, not nuanced enough to deal with those separate spheres?
The EU tried to balance two things: innovation – support for European companies that are trying to develop new AI systems – and safety for consumers. Because that is another thing: the EU tried to regulate AI, or AI systems, as products. So the approach was not actually to prevent harms to democracy as such, but to create a safe market for those products to be sold and developed.
We are still living in a world in which governments, and political parties for that matter – if we look at the manifestos of political parties in the last election in the UK, for example – are very vague about AI, and very uncritical about AI's potential to change the economy and so on.
But we need more attention to particular misuses of AI, which are still not properly regulated.
I mean, you mentioned the manifestos, and there are several significant elections across the world this year.
In your monitoring role, can you see that certain elections have had AI applications involved in them, or what's your general feeling about AI in politics this year?
In general, I would say that we need to resist this temptation to look at the relationship between AI and politics in this manner that is informed by so-called ‘technological determinism’.
Technology is certainly impacting democratic processes, but it is not the only force.
If we look at elections in different countries, we may observe that in those countries that have more established democratic systems, the impact of malicious use of AI is much lower than it is in less established democracies.
For example, in Turkey in 2023, the impact of deepfakes – fabricated videos generated with the use of AI – was significant.
Some commentators said that because a deepfake was used to force one of the candidates to resign, the results were different than expected.
It is different in established democracies such as the UK or the US, and also in Poland, my home country, where we observed that political parties used AI, for example, to generate the voice of their opponents from the other side of the political spectrum.
So, it was certainly used, but the impact is not just technological. Countries and democracies have histories. In the UK especially, think of the factors that impacted the election: the country went through 14 years of Conservative government, Brexit, Covid, austerity, and so on. These societal and political processes can be exploited by people who then use technology to try to influence voters, but they are too complex to be shaped by technology alone.
Thank you. You mentioned deepfakes and I don't know...
For us, education has also become a very important issue, because now, when undergraduate or postgraduate taught students need to write their dissertations, they have the help of generative AI available.
Sure, well let's talk about that.
And for us, it's not so much about deepfakes but about whether these students are producing these works themselves or getting help from generative AI. And I think it has shaken up the whole higher education system, because different universities have different approaches to how to deal with this, and I think it's still an open discussion.
So, what we would like nowadays is a tool that can detect whether a piece of text was actually written by a human or a machine.
Researchers have been working on that problem, the question being: is it possible to develop a tool that can reliably detect the difference?
And I don't think it's quite there yet, from what I recall. It's still an open problem, as Mauricio said. So, it is still a challenge.
As you said, the impact on our teaching is that... I don't know about you, but for my assessments, for example, I've steered away from essay-based written work. It's not that I don't trust the students; it's just to say: "Okay, let's go for oral presentations."
Be fair to everybody?
Yeah, exactly. Yeah, yeah.
Because, as you said, you want to be giving marks based on what you trust is their own work.
So yeah, so definitely, as you said, it's shaken up how we do things.
Can we see even a return to real time in person, old style, sitting in an exam hall with a pen and paper to really test students' knowledge in this day and age?
Yeah. Well, it's a good question because then you think about how we should actually assess students, because these students are going to go outside to the real world, and what are the actual tools that they're going to be using?
So what do we actually need to teach these students? What is it we actually need to learn, that's going to be effective for them to be a part of the workforce later?
Well, we’re getting into a really deep area of what is knowledge in this day and age.
But if I could maybe jump in on this point, because I developed a metaphor, and I don't know whether you will find it useful, in which we might describe our current approach to teaching as a 'Robinson Crusoe' paradigm.
We expect our students to be able to survive if they find themselves on a desert island without any tools, without anything to help them. The other approach draws on the concept of the 'extended mind', proposed by a few philosophers in the 1990s. They said that if something is at our disposal, like a notebook with the address of the gallery I want to visit, it is as if it were part of our own mind. So maybe now, with these tools constantly available to us, we should stop thinking that only what is in our biological brain matters; what matters is also in the tools we have at our disposal.
And we, as you suggested, should maybe adjust the way we approach teaching, knowing that students will go and, in their job, they will have access to those tools.
So perhaps we should not take away those tools, although I certainly agree with what you are trying to achieve: we want them to learn for themselves and to think without any additional help.
Because an old-fashioned response would be "If you enable them to find every piece of information through devices, then it takes away curiosity."
So we need to speak to them openly and tell them: "If you start using AI from your first day at the University, you will harm yourself. It's not that you will be cheating anyone else; you will be cheating yourself in the end."
So how are we training the next generation of AI students, researchers, scientists?
Yeah, so I think it is very important that we at the University are keeping the training going at two levels.
So, one level is the fundamental level. We have the Centre for AI Fundamentals, which aims to build a critical mass of researchers developing models and algorithms.
The other level is how AI is applied across many different areas of engineering and science.
So, we recently secured funding for the UKRI AI Centre for Doctoral Training in Decision Making for Complex Systems.
The plan is to train 72 new PhD students over the next five years.
We are going to have five cohorts, each with around 14 students.
The idea is that we are going to be forming these interdisciplinary teams between experts in AI, but also, experts in particular sciences, for example in astronomy, physics, engineering, biology, material science.
And I think, looking ahead, we are seeing more and more how AI is having an impact on many different domains outside the traditional ones like computer vision and natural language processing.
We are now seeing how we can apply AI to materials discovery and drug design. These were areas that were perhaps not so closely related to AI in the past, and people didn't think: "Oh, we could apply AI to that."
So, I think interdisciplinarity is something really important for us at Manchester, and at our centre specifically.
Thank you. And Manchester has been recognised by the government, hasn't it, with something called The Manchester Prize from the UK's Department for Science, Innovation and Technology.
Riza, could you tell us something about that?
I think it's very nice to be recognised, you know, for a prize like that to be named after Manchester, again in recognition of Manchester's role in digital innovation, computing and AI.
It will happen in several rounds. The first round was focused on, as far as I know, energy, environment and infrastructure. I think it was very popular, with more than 400 applicants.
Ten finalists were shortlisted, and each received £100,000 to start up their work. The grand prize winner will then get, I think, £1 million sometime next year to further develop the technology.
But essentially, it's a very nice way of demonstrating how AI, particularly, can be applied to certain areas like energy, environment and infrastructure, to make this really huge social impact.
And because it's open to anyone, not just academics and researchers but also start-ups and companies, it's a nice way of fostering a lot of activity across different applications of AI.
Absolutely, and the University has a platform called Digital Futures.
It's one of the University's three research platforms, and it essentially brings together academics, researchers and students working in computing, data science and AI.
But the nice thing is that they have themes which, again, highlight how these kinds of technologies can be applied to, let's say, healthcare, or sustainability and things like that.
It’s a nice way of networking with people from across the different faculties.
To help that interdisciplinarity.
Exactly, yeah, so it's a nice way of meeting people and forming new collaborations across the different departments and faculties in the University, yeah.
So can I ask: are we all excited about the potential of digital and AI, or are you concerned about its implications for society?
I think that, like many people, I am excited when I see how powerful these new AI-powered tools are, and how generative AI could be used for supporting, not replacing, human creativity and learning. I think that is important to stress.
But I am, of course, also concerned, for example, about how the benefits from the technology are being currently distributed.
So, there is a lot of inequality in the world, and the new advances in AI generate profits, but, as always, the profit goes to a particular group of people based in Silicon Valley, not to everyone in the world.
There are also the costs of technological progress. People sometimes think of technological progress as if it were the same as societal or moral progress.
It is different; it relies on a completely different logic, and just because we have more powerful tools, it doesn't mean that we will be able to solve those societal problems.
So, I am worried that people may start to think with this technological solutionism in mind. They will think that because we have the technology, now we are able to solve deep problems with inequality, exclusion, discrimination and so on.
But what we see thus far is that technology may contribute to those problems and make them more severe. So that is something that I would be worried about.
Yeah, so I think there has been a lot of extraordinary progress. AlphaFold, for example, addressed the decades-old challenge of predicting a protein's 3D structure from raw sequence data. Those kinds of advances, I think, are going to come more and more often.
And I think that is mind blowing because the number of applications and solutions that can come from that, for the general benefit of society are huge.
One of the things that worries me is how we, as researchers, make our work reproducible. One of the crises we are starting to see in AI is that a lot of people are creating models and publishing papers that others can't really reproduce. There is a lot of concern that we are not doing science anymore, and that people, just for the sake of doing AI, end up doing things that are not the adequate way of solving a particular problem.
So, I think we need to do a lot of work in terms of getting new generations trained in the best practices so that when they get AI products out there, they're sort of conscious of all the aspects, reproducibility but also the ethical implications.
Yes, I am also optimistic in the sense that I do think that these recent advances in technology bring a lot of benefits. So, you spoke about drug development earlier as an example. I think AI is increasingly being used in that space and I feel like maybe in the near future with the help of AI, better drugs will be developed, drugs which can help cure diseases that remain a challenge today.
Looking forward, what do each of you hope our third century will bring in your area?
Yeah, so I'm hoping for even more interdisciplinary work and collaborations, where we develop methods, technologies and models on the AI side and then continue to work closely with collaborators and colleagues from other disciplines to see how these methods can be applied in their particular areas. I did also mention at the beginning that I see my research as something that could potentially be applied to the Sustainable Development Goals, or SDGs.
I am hoping with the University's Research Beacons as well that there will be even more space, more scope to show how our research ticks those boxes when it comes to those SDGs.
One particular area especially is, I think this is also emerging now, like ‘sustainable AI’ and ‘green AI’, right? So, it's not just about how the methods or the models we develop in AI can be applied for sustainability but also making sure that the models we develop are actually sustainable.
Because, you see, these models come with the cost as well, right? So, when we develop and we train these models, we use a lot of compute and so on. How do we do that in a more sustainable way as well?
Considering that the University did very well in sustainability rankings, it would be nice to marry the two, with our research really contributing to that sustainability.
Maybe the engineers can work on ways for the data centres that do this computing to use less energy.
Yeah, I think I would like to see more research into explainable AI, trying to really understand what is underneath and open the black box.
For me, that would be the future. Because if we, as humans, don't have confidence in what these systems are doing, it's going to be really difficult to convince people that this is a good way to have a helper, or a replacement, for many different tasks.
So greater trust among the population?
Exactly.
About AI, and how it works?
Exactly, and the truth is that we don't know. These days, even if you speak to AI engineers, experts on LLMs, they really don't know what's happening inside these large neural networks.
I think that, going into the future, as we understand more about how these systems actually work, that could bring a lot of changes in how people perceive these AI tools.
But my hope for research in that field is that, with growing awareness of the actual impact of AI and other digital tools on politics and society, we will also be able to have more democratic control over them.
Because in the end, what we are trying to do is to gather knowledge and give that knowledge to society, so society could decide what we will do with the technology. And I would hope that democratic control over the development of technology will also be an issue, an important issue in politics.
Well, that's a very fitting and hopeful note on which to end. So can I thank all of you for taking part in this session.
To stay up to date with everything Talk 200, be sure to follow and subscribe to the series on the podcasting platform of your choice.
Head to manchester.ac.uk/200 to find out more about this series and all the activity taking place across our bicentenary year. Use the hashtag #UOM200 to engage with Talk 200 and our wider bicentenary celebrations on social media.
Thank you for joining us for this episode of Talk 200, a University of Manchester series.