00:01:48
And for this purpose, we make use of large language models. And this research field is very exciting at times, because we see that language models have some very interesting properties regarding language, and they also fit very well with how the brain represents language. So there is very interesting research happening right now using language models to study how the brain processes language. Nice. So, can you tell us a bit more about this Brain GPT in the context of your research? How does it actually work? Is it that the brain connects to ChatGPT to generate content? Why is it called Brain GPT?
00:02:40
Yeah, so the thing that inspired this podcast was a blog post of mine for EOS, in which I describe how language models can be used to decode brain activity, and it's called Brain GPT. That was just a name I made up for it, because it's actually not an application as of now. In this blog post I described research that has been done where they use language models to decode brain activity. They take non-invasive brain recordings of participants listening to speech, and once they have a lot of speech, they can compare the representation of that speech in a participant's brain to the representation of the same text by a language model. By doing so, they can build a sort of linear decoder that bridges the brain activation to the meaning represented at that time, the meaning
00:03:43
of the text they are hearing. And by doing this, they can decode the activity of newly heard input using this language model. So this is a very interesting new frontier in research, because previously, what was done is they used motor representations: participants had to imagine saying a certain utterance, and then researchers could decode what they were thinking of saying based on their motor representations. But now they can also decode the meaning of speech immediately from the language network in the brain, so there's no longer this intermediate step of using the motor representations. And this is quite impressive, because the language network in the brain is incredibly vast, incredibly complex, and spread out over a wide area.
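A minimal sketch of the idea described here, with invented toy data: learn a linear map from "brain" feature vectors to a language model's sentence embeddings, then decode a new recording by finding the nearest candidate embedding. The dimensions, the gradient-descent fit, and the cosine-similarity decoding rule are illustrative assumptions, not the actual pipeline used in the studies discussed.

```python
import math
import random

random.seed(0)

BRAIN_DIM, EMB_DIM = 6, 4  # toy sizes; real recordings and embeddings are far larger

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Hypothetical paired data: brain feature vectors, and the LM embedding of the
# sentence the participant was hearing at that moment (here generated from a
# hidden linear map, just so the toy problem is solvable).
true_W = [[random.gauss(0, 1) for _ in range(BRAIN_DIM)] for _ in range(EMB_DIM)]
brain = [[random.gauss(0, 1) for _ in range(BRAIN_DIM)] for _ in range(40)]
embeds = [matvec(true_W, x) for x in brain]

# Fit a linear decoder W by plain stochastic gradient descent on squared error.
W = [[0.0] * BRAIN_DIM for _ in range(EMB_DIM)]
for _ in range(300):
    for x, y in zip(brain, embeds):
        pred = matvec(W, x)
        for i in range(EMB_DIM):
            err = pred[i] - y[i]
            for j in range(BRAIN_DIM):
                W[i][j] -= 0.01 * err * x[j]

def cosine(a, b):
    num = sum(p * q for p, q in zip(a, b))
    den = math.sqrt(sum(p * p for p in a)) * math.sqrt(sum(q * q for q in b))
    return num / den

# Decode a "newly heard" input: map brain activity into embedding space,
# then pick the candidate sentence whose embedding is closest.
decoded = matvec(W, brain[0])
best = max(range(len(embeds)), key=lambda k: cosine(decoded, embeds[k]))
print("decoded candidate index:", best)
```

On this toy problem the decoder recovers the correct sentence; the real studies face far noisier recordings and much higher-dimensional embeddings.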
00:04:42
So this new advancement of being able to decode the meaning of speech participants are hearing, based solely on their brain activity in the language network, using a high-performing language model, is incredibly interesting because it might be used in downstream applications. But of course, there are many obstacles still in the way before something like that is possible. For example, Neuralink has now implanted a device that records brain activity, but when they do this, they always implant it in the motor cortex, where the brain activity is much less complex compared to the language network. So it's a lot easier to decode, with an electrode in someone's motor cortex, what hand movement someone is imagining than to use an electrode to decode the language network,
00:05:54
that is, to decode what they are thinking about. So is it sort of clear why it's such an advancement that they can now use non-invasive brain recordings? So, to decode, basically, what I understand, right? So it's not connected to the internet, but it helps tap into and unlock the full potential of what's inside, let's say, a person's brain, to then be able to transcribe it into words for people who are not able to speak, or who are not really that fluid, so that it still comes out as, let's say, a fluent sentence. Yeah, indeed. So that's
00:06:29
the main thing. The application that everybody is looking at is people who are incapable of speaking or producing the utterance, but are still capable of thinking of the meaning. So, for example, people with aphasia: they might be able to understand language and come up with sentences, but they are unable to translate these sentences into a motor command to their mouth that speaks the utterance. So if you could bridge this gap by using a language model to decode what they are thinking of, you could
00:07:05
skip this step and help people like that. I think that's going to be a really insane advancement, because the only other example in that context that rings a bell is, you know, Stephen Hawking, when he had to type his words on the computer. But then they wouldn't even need to type anymore, just think about the words. And what could be the other applications? Not considering the obstacles, what's the limit with these things? So, if we go very far, then something like telepathy, for example. Purely hypothetically, if we both have a Neuralink implant, I could think of something and transfer it to you.
00:07:50
Yeah, but that is of course still not possible, and it's probably never going to be possible, unless maybe through some crazy innovation, because this language network is so large. You cannot decode the meaning somebody is thinking of using one electrode or a few electrodes; you would need an electrode that spans the entire frontal and temporal cortex to decode what they are thinking of. And also, the device needs to be large enough to house a very powerful language model. And the top state-of-the-art language models right now have something like 405 billion parameters, the latest LLaMA version, which is, of course, like 100 gigabytes worth of language model.
00:08:46
So even if you have electrodes that could go into the brain to read out brain data, you would still need to be able to run a language model on that implant. And even if you could do that, you would also need to transmit the data fast enough and accurately enough to be able to communicate effectively with it. But that's also something Neuralink has now made a challenge of, because they saw that the data they are acquiring from the brain needs to be transmitted much faster than is now possible through wireless transfer. So they have written out a challenge for lossless compression: a challenge to compress the signal derived from the brain so it can be transmitted much faster to the device. And what is it called?
00:09:44
Lossless compression, yeah. So it's compressed without any loss of information. Compare it with streaming a video, where the quality can be lowered to keep the stream going, so you get a decline in accuracy. What Elon Musk wants is to keep the accuracy stable, without any loss of information. And based on the solution you have explained, do you want to say that you have a better solution than Elon Musk, that Neuralink actually put the implant in the wrong
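As a rough illustration of what lossless compression of a neural signal means, here is a toy sketch: delta-encode a slowly varying integer signal (so successive samples become small, repetitive numbers) and compress with Python's standard `zlib` codec. The waveform is invented, and real entries in the Neuralink compression challenge are far more sophisticated; the point is only that decompression must reproduce the input exactly.

```python
import math
import struct
import zlib

# Invented stand-in for a neural recording: a slowly drifting waveform,
# quantized to 16-bit integers like a real ADC would produce.
signal = [int(1000 * math.sin(i / 25) + (i % 7)) for i in range(5000)]

def compress(samples):
    # Delta encoding: neighbouring samples are close, so the deltas are
    # small and highly compressible by a generic entropy coder.
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    raw = struct.pack(f"<{len(deltas)}h", *deltas)
    return zlib.compress(raw, level=9)

def decompress(blob):
    raw = zlib.decompress(blob)
    deltas = struct.unpack(f"<{len(raw) // 2}h", raw)
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

blob = compress(signal)
restored = decompress(blob)
assert restored == signal  # lossless: bit-for-bit reconstruction
print(len(blob), "compressed bytes for", 2 * len(signal), "raw bytes")
```

Lossy video codecs can discard detail when bandwidth drops; here, as the assertion checks, nothing may be discarded, which is exactly what makes the bandwidth problem hard.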
00:10:22
place, that you would put it in a different one? No, I wouldn't say that, because this is a good step for now. You wouldn't be able to put it in the language network as of now and get good results; I think that's a step too far. This is a very good step to take in between, and then to scale it up to the next step once we have much more information on how the brain represents language and how it is represented in language models. Because we can decode language from the brain using language models and these non-invasive brain recordings, but still, the accuracy is not
00:11:10
that good. So there's still a lot of research and innovation to be done to get this to a very high level, and to understand even better what it is in a language model that makes it fit the brain activity so well, and what enables it to decode the representation in the brain, so that we can also build models specifically for this purpose that are even more accurate. So I think there's a lot to be done. I have a question here, because this episode is called Brain GPT. I can imagine having a chip in my brain; as we were discussing, it's already happening with Neuralink, which is actually helping people to transmit language. But what are the actual ethical risks and issues that AI can be facing here? Because sometimes people cannot, you know, even be with themselves and with their brain without a chip.
00:12:17
So when there will be something else inside your brain, are there some ethical risks currently? Yeah, that's actually a very good question. The paper I mentioned in the blog post also tested this. First of all, they saw that they could decode what people were thinking of very accurately using the representations of language models. They also investigated whether you could actively combat this, whether you could make sure that the language model was not able to decode what you were thinking about. And they investigated a couple of strategies, one of which was, for example, counting up in sevens, just doing a double task, thinking of other things while listening to something.
00:13:09
And that inhibited good accuracy of the model. So, doing another task, thinking of something else, or telling another story in your head, makes sure that the model cannot read out what you are hearing or thinking about. And another thing that may be reassuring for the doom thinkers: if you train a certain decoder, so you have a language model and you have brain recordings of a participant, and you train a decoder to bridge these two, then you can know, okay, that person is thinking of this sentence at this moment. But that decoder is very specific to one participant. So if you train a decoder on the data of one participant, you cannot generalize it to another.
00:14:03
So, as of now, if there were something like a Brain GPT, an implant that reads out your data, it would still be very specific to you, and it would require, for example, a very long calibration phase before it would work. So, in those terms, you can combat it, and the ethical risk of a breach by someone else is rather small right now. That's curious. So, it's not something for multitaskers. Like, with multitaskers, the LLM would break down a bit. Yeah, I think so, at least that's what the data appears to be saying. Would it be able to help you? I have so many questions, actually.
00:14:51
But the first one is: would it be able to help you, if you would install it, let's say, within your partner's brain, to decode what's on their mind? So that you actually know what they're thinking and not what they're saying. It might also be very unethical for yourself, because you will learn some things about yourself that you didn't know. You should have the privacy to give access to that or not. Yeah, but that takes us to the other ethical dilemma. Theoretically, can this kind of device be hacked, and can it then be used to basically also influence your thinking? Or is it at the moment only one-way traffic, or can it become that?
00:15:44
At the moment, the research on decoding brain activity using large language models is just a one-way street. You just have brain activity and you look at what they are thinking. The Neuralink implants are also a one-way street: they just record brain activity with some implanted electrodes and use that information to power some other application. But in principle, if you could also influence brain activity, you could influence what the brain is representing at that moment. So, for example, there are applications like tDCS, transcranial direct current stimulation, which is a very small current that is sent through the brain. And it's currently being used in applications like, for example, treatment of depression.
00:16:46
It might be that there's some brain region that is too active or not active enough, and that by using this stimulation you can balance the activity out, to help people struggling with some severe form of depression. The same goes for TMS, transcranial magnetic stimulation, which uses a magnetic pulse instead of a current to disturb brain activation in general. All these applications have an impact on our brain and our brain activation, but as of right now, it's very difficult to steer the activation in the direction we want. You can generally inhibit a certain region, or activate it more, and have some general influence on people's behaviour or feelings, but you cannot specifically steer it towards, for example, a thought
00:17:45
or one representation in the head. Yeah, I mean, if that were possible, I think you would open another can of worms, either in a positive direction, but also, yeah, super sci-fi. It's another Black Mirror episode. Yeah, that's really curious. There are a lot of ethical questions which, I think, will also evolve with the development of Neuralink. And I also have to ask: do you actually like Elon Musk? When you look at what he's doing, is he like an idol to you? And what are the other players in the field currently? I would not say he's my idol per se. I do admire the entrepreneurial spirit and innovation that gets sparked in these various companies.
00:18:43
And especially with things like Neuralink, there is also a lot of controversy with regard to the animals that were used during the testing phase of the Neuralink implants, and some other ethical considerations. So I think everything he does needs to be looked at through a critical lens. But in general, it seems like he really sparked some major innovations across many fields. So that's admirable, at least. Yeah. He's a challenger. He breaks the status quo. Sometimes for the positive, sometimes for different reasons. But his mindset is difficult to follow. Yeah, definitely. So, as I understood, have you met him, or have you seen him speak at one of the conferences? We were in the same room with him. Yeah, the room, broadly speaking; it was very broad.
00:19:38
It was, you know, not a concert hall, but, I think, a sports arena near the Expo in Paris. And last year he was there physically, which was pretty cool, because he called out his mom, who was sitting there, and his family. And I still remember, last year was not what I expected. I mean, he's a genius. You can see it in the way he's talking. His brain works too fast for his speech. Yeah, it's incredible. And then this year we saw him on a big screen, in the same room. But it was a more curious discussion, because we could ask questions ourselves. So it was more challenging for him, answering extra questions from the audience.
00:20:27
Whereas when he was in the room physically, it was more like, you know, there were prescribed questions. And even though they tried to make it challenging, it was still from a moderator who was trained in doing this. So, yeah, there were both sessions, and it was really nice to see and be with him in one room. But yeah, regarding this Brain GPT: you also have a really interesting career that you went through, because what I understood is that you were with Cedric during bioengineering, and then you switched to psychology. So if someone would like to work with this kind of innovation, what would you recommend as a career path?
00:21:18
Should they also take a similar kind of path as you did, dropping out of bioengineering and going into psychology? Because I think this also gave you some really added value and a unique value proposition in the field, having both psychology and engineering. Or what would you recommend to people who are interested in this field? Yeah, that's actually a very good question, very interesting to think about. I think you need to be very multidimensional. The combination of psychology and cognitive science with some love for technology or computer science is an ideal combination in this case, because you're dealing with, yeah, the complexity of language.
00:22:12
The complexity of language and linguistics, how people deal with language, how language is represented in the brain: of these there is a very strong tradition in cognitive science and a lot of things to be learned. But then, of course, to use these models, to implement them, to train them, and to find new ways and build better models that better fit these brain data, there's a lot of technical knowledge that goes into that. And I think people with a background in computer science or some engineering background have a strong advantage in that regard. So I think the best thing is some interdisciplinarity: both worlds coming together, using the knowledge on language from cognitive science combined with computer science and engineering skills to build some amazing models
00:23:11
that both enhance our understanding of language and the brain, and also build some cool stuff while doing it. And when you are looking at current innovation regarding the brain: things are always changing. We had horses in the past, then we switched to cars. Then there was the internet, which was on a computer so big that you couldn't fit it in your pocket or even in your backpack. And now it's more like a smartphone. So let's say that my career would be transforming in this aspect. Does it mean that in the future I would not need to have a phone or my laptop, and I would be working live on a chip in my brain? Or how would this work?
00:24:08
That's a very, very far future. I think this is at best a dream at this point. So I don't think we'll get there, maybe not ever. But the road to get there is definitely very interesting. And I think right now, for the progression in the field, on how brains represent language and how language models can be used to encode brain activity and enhance our understanding of how brains deal with language, the main bottleneck is actually just the data. These models are trained on more and more data; they require trillions of tokens to be trained on, and these are often mined from various web pages. So I think the quality of the data can be improved upon drastically.
00:25:10
So, I think for the future, if you want to work in this, a good, maybe not the most exciting, but a very necessary part is just: how do we get more quality data, in an ethical way, that can be used to train these models and improve them even further? I think that's the main thing currently in the way of further improvements in this field. The need for data, like data quality, is going to be important for everything that's based on this technology, like these LLMs, but really anything for a company trying to leverage the new technologies: data. I mean, I just get so annoyed by it, actually, because everybody wants the output, but not a lot of people are actually willing to invest in making sure that the quality input is also there. But then they do expect the output to be great, right?
00:26:19
Yeah. A very funny anecdote in this regard: a lot of the data used for training LLMs comes from Reddit, and the people of Reddit are also aware of this. So you have this huge bias, of course. And recently there was this trend of people on Reddit commenting 'Bazinga' on everything, to flood the models with just the word 'Bazinga', giving the data engineers at OpenAI a lot of headaches to get the data cleaned up. And another thing OpenAI is apparently doing is ignoring website restrictions. Websites can denote which parts of the site can be used for web scraping, and OpenAI has apparently ignored part of these restrictions placed by websites, to scrape even more data.
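The scraping restrictions mentioned here are typically declared in a site's `robots.txt` file. A minimal sketch of how a well-behaved crawler would honour them, using Python's standard library; the bot name, rules, and URLs below are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# Invented robots.txt: this site bars a hypothetical GPT crawler from its
# /comments/ section but allows everything else to everyone.
rules = """\
User-agent: ExampleGPTBot
Disallow: /comments/

User-agent: *
Disallow:
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A compliant scraper checks before fetching each URL.
print(rp.can_fetch("ExampleGPTBot", "https://example.com/comments/123"))  # False
print(rp.can_fetch("ExampleGPTBot", "https://example.com/wiki/faq"))      # True
print(rp.can_fetch("OtherBot", "https://example.com/comments/123"))       # True
```

`robots.txt` is a voluntary convention, not an access control: nothing technically stops a crawler from ignoring it, which is exactly the behaviour being criticized.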
00:27:19
So, yeah, that's the reaction against that: trying to make their data lower quality. So actually, the takeaway of today's podcast is: just go on Reddit and write the future, write the history. No, write 'Bazinga'. Right, write the bias you want to see in the world. And what's next for you in your research, Sam? So currently, I'm writing a paper where I evaluate a range of Dutch large language models, because a lot of the work is being done with English models. But as I am from Belgium and I like the Dutch language, I wanted to see what's out there in terms of resources for the Dutch language, because I am using Dutch-speaking participants in my own research.
00:28:17
So I needed a good language model to look at their behavior and their reading times. For that, I evaluated a range of large language models in Dutch, and we're currently writing that up. And in the future, we would like to do some research on how children read: to look at whether children, when they are reading, make use of the context to predict the upcoming words, and to what degree, perhaps to compensate for their lack of reading skill at that point. Because if they appear to be doing that, it might also influence how we measure reading ability and reading comprehension. So that's the direction we're going in now: to use these language models to see how children are reading, what information they are using, and then to use that information to improve how we measure their reading skill.
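The usual bridge between a language model and reading times is surprisal: how unpredictable each word is in its context, which tends to correlate with how long readers linger on it. A toy sketch with an invented bigram model; real studies use full neural language models and real reading corpora, and the sentences and counts here are made up:

```python
import math
from collections import Counter, defaultdict

# Tiny invented training corpus standing in for trillions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Fit bigram counts: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

vocab = set(corpus)

def surprisal(prev, word):
    """Surprisal in bits, -log2 P(word | prev), with add-one smoothing."""
    counts = bigrams[prev]
    p = (counts[word] + 1) / (sum(counts.values()) + len(vocab))
    return -math.log2(p)

# "on" always follows "sat" in the corpus (predictable, low surprisal),
# while "mat" never does (surprising); the hypothesis is that
# high-surprisal words attract longer reading times.
for w in ["on", "mat"]:
    print(f"surprisal('sat', '{w}') = {surprisal('sat', w):.2f} bits")
```

Comparing such per-word surprisal values against children's reading times, word by word, is one hedged guess at what the described analysis could look like in practice.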
00:29:18
Oh, that's beautiful. Because I always hated the reading classes at school. And I think a lot of fear of public speaking also comes from that, because at some point we all needed to read in front of the whole class. I think this would really help children, and also with some traumas that are created at an early age. Oh, definitely. I also really like what you said about language. Are we also speaking here about a language gap? Because, for example, in the digital world we have a digital gap: there are not enough gadgets in different societal spheres, or, for example, the Czech Republic is not as digitized as Belgium.
00:30:04
But here we are also speaking about a language gap: not all our languages have a role model like you, someone who would really go deeper into research in their local language, because mostly the IT code, et cetera, is in English. I mean, I also know Czech coders who don't know any English, and they use the English inside the programming language without actually knowing it, because they are just using coding as such. So do we also speak about a language barrier here when we are speaking about machine learning, et cetera? Yeah, it's actually a very large part of the current literature to look at languages beyond English, because English is of course the largest player in the scientific literature on reading and language.
00:30:58
And so a lot of people are trying to step away from it, and to compare whether the same processes apply to other languages. For now, they often use multilingual language models. But I definitely see some improvement: a lot of people are trying to get these language resources on par, or at least off the ground, for these low-resource languages as well. Wouldn't it be easier for the whole European Union to use the European language? Maybe not for speaking, where it wasn't really successful, but for these research projects and for machine learning, to have our own representation of the European Union? Yeah, Esperanto. Yeah, we tried that one.
00:31:52
I think it's actually just very nice to have this variation in languages, and it's a very interesting thing to look at, because there are some differences between languages that are also represented in the brain; there are some very distinct cross-linguistic differences in how people represent languages. So I think it's an interesting research line to pursue as well. And here as well, a lot of effort is going into the data, to make sure that there's reading data from many different languages. There are lots of initiatives trying to gather reading data across many languages, to get the whole globe represented in our research. So that's nice to see.
00:32:49
So here, do you see that for every language we would need to have an LLM, or do you think there's a future where one model can decipher multiple languages at the same time? The reason why I'm asking is because we have this device, Plout AI, and it helps me so much with my meeting notes that I can just focus on the other person speaking. But sometimes, I mean, you know how it goes, you switch between Dutch and English and then maybe some French, and then it doesn't work anymore, because I can only select one language. But is there a future where it can just do that, deciphering multiple languages at the same time?
00:33:29
Yeah, that's actually also a very good question, because in my recent research I also compare the Dutch language models to a multilingual model, and they actually measure up quite well. And the relations we see using the Dutch language models also hold up using the multilingual model. So that's very useful to see. And when it comes to the brain data, it appears that it is the generalizability of the representations by the model that makes it fit the brain. So I think in the future there could be a huge model, trained on a massive amount of text spanning so many languages that it generates a representation that is highly generalizable across languages, and that might then be useful across all languages.
00:34:20
And here again, it's just a matter of having enough data. So I think it's definitely possible, and definitely an interesting line to pursue as well: to look at multilingual and generalizable language models. I also have a question, because we know that the brain is quite a complex mechanism: you have different hemispheres, different parts, et cetera. And I also think that language can play a role in which parts of the brain we are using; for example, Czech people would use it slightly differently than people in Japan, because their writing is more visual, with different kinds of characters, et cetera.
00:35:11
So how would Neuralink, or chips like it, cope with these kinds of differences? Does that play a role, or does the chip just cope with whatever functions the brain is doing, without any difference based on which country you are from, or whether you are a more artistic or a more mathematical person? Yeah, that's also a very good comment. Because what I understand from the literature is that the variability between individuals is a lot larger than the variability across languages. So for two individuals speaking the same language, the difference in the representations in their brains might be as large as, or larger than, for people speaking another language.
00:36:12
So I think it's not necessarily the language per se, because the representations in the brain are not that specific to one language, but they are going to be very specific from one person to the other. So, yeah, it's not so much a question of the language people are speaking as of the person speaking the language. Curious. Nice. But, of course, there will be some differences, for example, between a logographic language like Chinese and an alphabetic language. But I think those differences will arise mainly earlier in the language process. So it might be that they are mostly represented in the visual part, and not necessarily in the part of the brain that represents the meaning of the text or the input.
00:37:12
My next question is about your research, because, for example, from my experience, or Cedric's, we also did some research in the past, but then we were demotivated, because the research is not really read or used in practice. So, is there some change in research on really high-demand subjects such as this one, which could really be used in companies in the future? Do you see some practical usage, for example, companies contacting you? Could you give us some motivation from the academic field?
00:37:58
Well, from my own perspective, I cannot offer that motivation right now, because I'm very early on in my research. So, as of right now, there are no applications that can be used in industry or elsewhere. But I do have an interesting case that I came across when I was preparing a lecture. At the Dutch Forensic Institute, they trained a large language model on text messages hacked from criminals. Every now and then, the government gets to hack the secret messaging service used by criminal organizations, and they can then just read along. And they also trained a large language model to detect when there was a threat to life in one of those messages.
00:38:52
And then they used these models in combination with human raters: the model would flag certain messages that it suspected were threat-to-life messages, and then they would be reviewed by a human rater. If a message was deemed to be a real threat, they would intervene. It was a huge investigation and a huge operation, and I think they intervened in about ten different kidnappings or threat-to-life situations. So that's one case where language models were really lifesaving. And then, in my own research, I try to use and build models that are as small as possible and still work very well. So you sort of have the same restrictions as in that case, because you might have very little data, very few text messages, and you need to train a model that both represents the threat-to-life messages and is able to capture the critical ones.
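The flag-then-review workflow described here can be sketched in miniature. To be clear, everything below (the scoring function, the threshold, the example messages) is a hypothetical illustration of the general pattern, not the Forensic Institute's actual system; a real deployment would use a fine-tuned classifier in place of the stand-in scorer.

```python
# Illustrative sketch of a flag-then-review pipeline: a model scores each
# message, and only messages above a threshold are routed to a human rater.

def score_threat(message: str) -> float:
    """Hypothetical stand-in for a trained classifier's threat score."""
    threat_terms = {"kidnap", "kill", "weapon"}
    hits = sum(term in message.lower() for term in threat_terms)
    return min(1.0, hits / len(threat_terms))

def triage(messages: list[str], threshold: float = 0.3) -> list[str]:
    """Return only the messages that a human rater should review."""
    return [m for m in messages if score_threat(m) >= threshold]

flagged = triage([
    "see you at the cafe tomorrow",
    "we kidnap him tonight, bring the weapon",
])
# Only the second message crosses the threshold and reaches the human rater.
```

The key design point, as described in the case, is that the model never decides alone: it only narrows down which messages a human reviews, which is what makes a small, imperfect model workable.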
00:40:01
And in the same way, we want to train a model, for example for Dutch, for which there is not a humongous amount of data. We need a model that is trained on a limited amount of data and still represents language as accurately as possible, and is able to capture the linguistic processes in humans as accurately as possible. So the same tendencies and the same obstacles apply to both situations, and that's an interesting parallel. You know, what I always liked about this kind of research is that once you get a practical application out of it, the impact can be enormous. It's just that the road to that application can sometimes be quite long. But that's why I'm very happy to have people like you,
00:40:49
Sam, who are willing to put the time and effort into continuing this investigation, and also into making sure that it's ethical and all, because I was not going to do that. I'm very happy we have people like you. So, thank you. Well, thank you. All right. So, I have a last question from my side: what would you advise people starting an academic career, for example, on how to overcome the different obstacles that appear on their path? So, what would I advise people who are younger and who want to follow a similar kind of path as I do? Well, that's also a very good question.
00:41:37
I think the main thing, especially in a PhD research situation, is to have a supervisor that matches, because in the very beginning stages of your research, you depend a lot on your supervisor, your supervisor's network, and their advice during the process. So you need someone who supports you through these early stages, when it's quite difficult to get started. And if you have that, and you can then pursue the things you're interested in with the support of someone more experienced, I think that's a very powerful situation to be in, and something to aim for.
00:42:21
So for people looking for a PhD position, I would say: follow your interests first of all, but also look at who will supervise you. Get some information from people who worked with them previously, and take your time to see that the match is good, because otherwise you might be in for some unexpected and unpleasant situations. It's like in corporate: your manager matters. Yeah, it is. Thank you. Yeah, it can definitely make things more difficult if you choose the wrong supervisor, or if he or she doesn't match your values or the mission and vision that you want to proceed with.
00:43:13
I was always amazed at how the system for getting into a PhD varies between countries, because I wanted to do my PhD in Canada. And they told me that before actually submitting my application for the PhD, I needed to find my supervisor. But I was here in the EU, and I was like, okay, how can I do this? How can I find my supervisor when I am here in the EU? And they were like, yeah, there is a list of supervisors, so you just write an application about what you want to do, what your research would be about, and you ask them if they want to supervise you. And I was like, yeah, but... I mean, at that time it was before COVID.
00:43:54
So there were not a lot of virtual meetings where you would just jump on a call and discuss it. It was really just based on emailing. Sometimes they didn't reply at all, or sometimes they were like, yeah, this is not really our area of research. So it was really a pain for me. Okay, that's a very difficult situation. And did you eventually start a PhD? No, in the end I decided to go work. But it was also that I got discouraged by a lot of people in my career. My relatives always supported me, but it was rather people from my career path, like managers or people I was working with, who were discouraging me, saying it wasn't really worth pursuing, et cetera.
00:44:53
And I also felt that it was super long, like four years, and I'm quite a dynamic person. Sure, interests might shift across the years. Yeah. So I would need a PhD with a bit of freedom. And then I also tried Cambridge, where I could actually meet the GPA, but they told me that I would have to live within about 50 kilometres of Cambridge. So I was like, no, no, thank you. Living in the neighbourhood of Cambridge for a minimum of four years? I don't think so. So yeah, it was not successful. Maybe in the future there will be more flexible options, so that I can work and do the PhD at the same time, because it would make more sense for me to base it on practice.
00:45:45
Like, from the field of your career, you see that there is some gap, and then you do research about it. So I would like it to be more practical. Yeah, it's also a very valuable approach, taking the direction from industry. I don't know how it is now, whether more PhD programs are keen to do that. At the time I was looking for it, there was only one in the Netherlands, in the EU, that was doing it, I think in Maastricht. But yeah, it was only one. And then I was like, okay, maybe in the future there will be more options. So, let's see. Okay.
00:46:27
So Rick, do you have some closing remarks? No, none from my side. I think it was really cool to hear more about Brain GPT, both with the constraints and unshackled, going limitless: both the positive and what could be the potential challenges or doomsday scenarios. But yeah, I think it was thought-provoking, and I'm very curious and excited to see what's more to come in the future from your research, Sam. Yeah, thank you. I really loved the conversation. It was very nice, and thought-provoking for me too. So yeah, I'm very happy that you invited me to your podcast. Thank you very much, Sam, and see you, maybe in Antwerp on Monday. Yeah, see you. Cheers. Bye.