00:02:52
So what is currently happening in the data field? Because you are experienced, you are actually a person who is working directly in data science. So what is the current issue in the field of data? What is happening in the data field? Well, that's a lot to unpack. But first of all, data work, or data science, or however people market it, is all about getting insights: getting something useful, something intriguing, first and foremost, that will capture the attention of the people who are involved, all the stakeholders, and also something actionable. And this often takes the form of something that people can use as a product or a service. And this is the end goal of any data science work, really. Of course, not everybody needs to do that.
00:03:45
Some people just need some insights to get an idea of what's happening in their company, in the market, in wherever they're exploring, to get a better understanding of how effective their marketing campaigns are, for example, and understand what data they have and what they can do with it. So that's the gist of data work in my experience. Now, of course, it's much more nuanced than that. There are different subfields that involve different parts of this whole pipeline. And nowadays, I wouldn't say it's a problem, as it is more like a trend, that the data field is more geared towards AI. And by AI, we usually mean the use of artificial neural networks, particularly very large ones that are very deep in terms of layers and connections, which is what people refer to as deep learning, deep learning networks, and how these are utilized in all kinds of processes, including data processing and data analytics.
00:04:51
So, that's a short answer to your question. Thank you very much. And it's definitely really interesting, because AI, the language models, is what we are using for processing data, different kinds of data sets, et cetera. So it's definitely relevant to speak about this. And who are your usual clients? Because you said that you are a consultant, that you started your own thing. So who are your clients usually? Different business people who want to do something related to data, particularly leveraging AI and other modern technologies, as well as people who want to learn more about data work and evolve as professionals in this field. So all of these are people I work with.
00:05:53
Now, beyond that, I also appeal, through my books and courses, to all kinds of people who want to learn about this stuff. So I don't always have direct contact or communication with the people involved, but anyone who is involved, in either a practical or a learning capacity, in data work is someone I would work with. Yes, that's really interesting to hear. And you are really active on LinkedIn, so our audience listening right now can also connect with you on LinkedIn in case they want to see some of your work and what you are currently up to. And regarding data, there is also really a lot of discussion regarding, for example, GDPR and how our data are used.
00:06:46
So do you also sometimes work in these fields, like in terms of data protection? Or would you maybe recommend something to our audience about data protection and how to use it in practice, how to maybe also prevent some kind of misuse of their data? That's a very big topic that we're broaching now. But in a nutshell, I don't do GDPR in particular, because this is more like regulatory compliance, and there are people who are better equipped to help you with these kinds of processes. But I do have experience in protecting and securing data from a privacy perspective. So when there's personally identifiable information, such as names, credit card numbers, addresses, medical conditions, all this stuff that could possibly be used to predict who the person behind the data point is.
00:07:51
This kind of data is often in need of protection. And I have worked with different techniques, including anonymization and pseudonymization, to protect the data, and most importantly the people behind the data, from these kinds of situations. So in that capacity, yes, I have. What were the two things that you said? There were like two things to protect the users; what were the two things you mentioned? Yes. Anonymization, which involves removing all the variables that have this PII, this personally identifiable information, in them. And pseudonymization, which involves in a way transforming these variables so that people cannot use them to track down the people behind the data. So it is like anonymization, but not exactly.
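To make the two techniques just mentioned concrete, here is a minimal sketch in Python using pandas. The table, the column names, and the salted-hash `pseudonym` function are all hypothetical, made up for illustration; a real pseudonymization scheme (and its key management) would be chosen to fit the actual data and regulations.

```python
import hashlib

import pandas as pd

# Hypothetical records containing personally identifiable information (PII)
df = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "email": ["alice@example.com", "bob@example.com"],
    "condition": ["diabetes", "asthma"],
    "charge": [120.0, 85.5],
})

# Anonymization: remove the PII variables entirely
anonymized = df.drop(columns=["name", "email"])

# Pseudonymization: transform the PII so records stay linkable
# (same input -> same token) but identities cannot be read off directly
SALT = "keep-this-secret"  # illustrative; a real salt must be stored securely

def pseudonym(value: str) -> str:
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

pseudonymized = df.assign(
    name=df["name"].map(pseudonym),
    email=df["email"].map(pseudonym),
)
```

The compromise described next is visible here: `anonymized` has lost the identifying columns for good, while `pseudonymized` keeps them in a transformed shape that models can still use as keys.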
00:08:45
So it is like a compromise, where you remove some information, but you keep the information you may still need in your predictive models, for example. Yes, this is also very useful for websites, if I'm not mistaken. Like when you, for example, have a website and you are storing data about your users, et cetera, you can anonymize your databases if you want to then give them to your subcontractors, et cetera, or pseudonymize them if you want to. So just also to explain to our audience the different terms and how they can use them. So sorry that I jumped into your elaboration; you can continue. Yeah, of course, no worries. In the case of this example you mentioned, anonymization would be getting rid of, for example, the name of the visitor, if you have that, or their email address, if you have collected that from a web form or something. The pseudonymization would be to mask the data related to their IP, for example.
00:09:50
So you still show that, okay, this is an IP from the US or from Italy or from wherever, but you don't show what this IP address is exactly. So somebody who is doing analytics on your web traffic, for example, can still make sense of this and use it and say, okay, well, I noticed that we have an increase in the people who visit our sites from the UK or from Spain or wherever, based on this IP information. But they don't know the specific IPs of the people who visited, because this information has been removed. So this way the people who visited the website cannot be tracked down, and still the people who do the analytics, and the people who own the website obviously, get some insights about what's happening on their website and what they may need to do to improve, for example, traction in specific countries.
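As a rough sketch of the IP masking idea described here: the traffic log below is hypothetical, and truncating the address to its first two octets is just one illustrative masking choice. Country-level analytics still work, but the exact address is gone.

```python
import pandas as pd

# Hypothetical web-traffic log: full IPs plus a country looked up earlier
traffic = pd.DataFrame({
    "ip": ["203.0.113.7", "198.51.100.23", "203.0.113.99"],
    "country": ["US", "IT", "US"],
    "page": ["/home", "/pricing", "/home"],
})

# Mask the IP by keeping only the network prefix (first two octets),
# then drop the exact address before handing the data to analytics
traffic["ip_prefix"] = (
    traffic["ip"].str.split(".").str[:2].str.join(".") + ".x.x"
)
masked = traffic.drop(columns=["ip"])

# Country-level insights survive: visits per country
visits = masked.groupby("country").size()
```

The analyst can still report "two visits from the US, one from Italy", but no row in `masked` pins down a specific visitor's address.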
00:10:46
Yes, thank you so much for sharing. And it's definitely a good practice. And this episode is called Thinking Outside the Black Box. So can you maybe elaborate on what you mean by black box? Well, the black box is a term that is commonly used to describe current AI systems. And it's basically when we have some kind of model and we don't know what's happening in the model. We don't know how it comes up with its outputs, like how does it process the data? And we don't know what variables it uses and how it uses them. So it's basically like a mystery. It's like a magic box. We don't know what's happening in there. Anything could be going on in there.
00:11:36
Just an example from the early computing days: when IBM started creating these PCs, the personal computers, there was a black box in a way there, where you knew that something was happening, but you didn't know exactly what kind of circuits it had. It's something similar to that. So modern AI systems may be open in their architecture; for example, we know how they're structured, but we don't know exactly what they do with the data they're given and how they come up with their outputs. And this may sometimes be inevitable, but it creates issues that we may need to transcend if we are to make the most of this technology. And there is already a lot of interest, not just among researchers in this field, but also in the business world, towards a more transparent kind of AI, where there is no black box, or it's not exactly black; like, we understand what's happening, more or less.
00:12:36
We don't understand everything. And the title that I proposed for this episode is basically a play on words: thinking outside the box, but also the specific black box of AI systems today. Yes, for me, it's also a really challenging and eye-catching title, The Black Box. So I really liked it when you proposed it. Because people are always saying, yeah, you should think outside the box, so it's a really well-known expression. But then when you added a black box, it catches your eye: so is there something else that I may be missing? And this is also what you just elaborated on: that with the different emerging trends, there might be something hidden from the end user that they usually don't think about.
00:13:37
And people wonder what is happening with their data in the daily process, when they are, for example, discussing certain aspects of their life with an AI model, and then their data are stored who knows where, with whom, et cetera. So I would like to ask you: how do you continue to improve your skills in handling new technologies and big data yourself? We are speaking about emerging technologies, and there are lots of new things happening, the evolution of tech and new data. So how do you yourself, for example, improve your skills in terms of handling these new technologies and big data in the data science field? That's a very good question. And I think anybody who is in this field ought to ask this question of themselves.
00:14:37
I try to read different articles, books especially, and also talk to other people who are in this field, and in other related fields, to get an understanding of what is needed from the field, what is required, what is expected. Because sometimes the technical knowledge can only get you so far. Sometimes you have to understand where the other people you liaise with are coming from and what problems they need solved through this field. So I try to understand where they are coming from and what their understanding is. Because you may know everything there is to know about something, but if you can't understand where the other people are coming from, it's very hard to bridge this gap of understanding, because everybody has a different frame of reference, nowadays especially.
00:15:22
So I try to go deeper into these topics, talking with all kinds of professionals and getting to know the latest trends as much as I can, because I'm not a researcher, so I can't know everything that's happening right now. But I get an idea of how the LLMs are evolving, what the latest ones are, and how they add value specifically to different organizations. And at the beginning you mentioned that there is a big thing about AI; when I asked you what is happening in data, you mentioned AI. But what are the other emerging trends in data science and AI that excite you the most, from the perspective of a person in the data field? So what is the most exciting thing coming in the upcoming periods, months, years?
00:16:23
That's a good question. What excites me is twofold. First and foremost, the technologies themselves and how the methodologies around these technologies evolve, taking advantage of what the technologies can offer, but also the business initiatives that are built around all this. Because it's one thing understanding that LLMs can do this, that, and the other. But so what? Who cares, really? Because these are not just pieces of software developed for fun. These are designed to solve specific problems. And sometimes the problems they can solve go beyond what the initial developers of these technologies and systems have envisioned. So it's always good to think beyond what the technology is designed for and understand how people are thinking of using it.
00:17:13
Because if you talk to entrepreneurs, for example, who want to leverage AI, they have a bunch of different problems they are tackling that were never tackled this way before. And that's intriguing in many ways because, first of all, it shows that the technology has promise, and also that there is value to be derived from it. It's not just something that pleases the technologists, but something that can benefit everyone, especially the people who don't know about these technologies and never have to know about them. Yes, it's like what one guest said in another of our episodes, which was about creative thinking: a lot of people should first ask themselves what kind of problem they want to solve before they get into the emerging technology or use the different software tools and everything that is out there online.
00:18:06
Like a lot of people want to process data at some point, about their customers, about their clients, how to do the statistics, et cetera, but they jump on the wave of a new tool without actually understanding what their problem is and what they actually want to solve. In this sense, because you are a writer, you have written books yourself, and you are a speaker, how do you balance technical depth with accessibility when communicating complex topics? That's a good question. First of all, the writing of a book is not just putting words on paper. There is also a lot of research that goes on behind it. And that's how I get the technical depth that you mentioned, and also the business depth, especially in the later publications. Now, the balance is tricky.
00:19:03
It always depends on how you view the whole matter. Because for some people, writing a book is a matter of putting something on the resume. For other people, it's a vanity thing: they want to have written a book by the end of their lives, to prove something to themselves. And for other people, it's a means to an end. And for me specifically, this end is helping people get more data savvy without always having to go into technical details. So once you understand this and you really connect with at least some of the people in the audience, because you can't connect with all the different kinds of people who may be reading your books.
00:19:48
But once you connect with those representative people that you plan to target with your book, then it's easier to get this balance, because it comes about organically, in my case. I don't think too much about what to write; I think a lot about what to edit afterwards, thinking about the people, getting feedback from the alpha and beta readers, and gradually working with an editor to refine the whole thing, to make it not just interesting but useful. And that's why I'm a very big fan of having a good editor alongside you, even if it doesn't always feel great, because they can sometimes give you feedback that you're not ready to receive. But through this process, you refine your writing and also your understanding of what this whole project is about.
00:20:45
It's not about you at the end of the day. It's about the others, the other people who may read the book, and maybe you will never meet them. And so who is your editor then? We can make a bit of promo for the good editor that you are working with. My editor is Steve Hoberman and his team. He has this technical publishing house called Technics Publications, and it's based in Arizona, US. Interesting. So, in case some of you who are listening right now are looking for an editor in the tech industry, in the tech field, then definitely reach out to this one. And can you tell us more about the book? So, what is it about?
00:21:36
Are you referring to the latest book, or which one? I am referring to the one, 'Question-Driven Data Project'. So, are there some more? How many books did you write, as a first question? This is my latest one, and it's more like a business book, with a lot of details about the projects and not so much about the methodologies. But before that I have written about a different way of tackling data problems through heuristics, The Data Path Less Traveled. And before that I had another business book. No, before that was the machine learning one using Julia. And before that there was another one that is about bridging the gap between the business and the data world, Data Scientist Bedside Manner. And of course there was the AI for Data Science book.
00:22:29
And that one, along with Bedside Manner, I co-authored with another data scientist. And before that I had three more. So yeah, it's altogether about eight books. Oh, that's a lot of work then. And are all of them more or less about IT business, like how to understand the tech field in a more newbie-friendly way? Because you said that you usually bridge IT with business, that you help people navigate the digital landscape, so that they don't need to be that tech savvy to, for example, understand what you speak about inside your book. So are they all on a similar kind of theme, or are they really different? They're a bit different. There are some books that also have code in them.
00:23:26
Like every chapter has some code notebook attached to it, and there is code also in the books themselves, in the pages. But there are some books, like the latest one, that are more like normal nonfiction books that don't have any code attached or even referenced anywhere. So these appeal more to a higher-level kind of thinking, which is often necessary as well when you're dealing with data, because not everything is just ones and zeros. Sometimes you have to be able to talk to different stakeholders and different people in a team who may not be able to understand the science, and you don't expect them to understand everything that the science involves; otherwise, you'd be talking to scientists. So, yeah, I try to bridge the gap in different ways.
00:24:15
The latest book attempts to do that on a more business level. Interesting. And so, as an author and a speaker, experienced in academia but also in your professional life, what would you recommend? What would be the first thing for people who want to jump on the digitalization wave? Like, what would be step number one for people who want to digitize their skills and who want to get more digitally savvy? I would start with what you mentioned a few minutes earlier about the creative approach, which involves understanding the problem first, because it's easy to get lost in the different possibilities that technologies offer. But I would start with a problem. What problem would I try to solve? Then understand how this problem is solved, or at least tackled, right now.
00:25:16
Because if it's there, somebody is trying to solve it, for sure. Unless you have discovered a brand new problem that nobody else has thought about, in which case that's something else, but it's very rare. So how are people tackling this problem? Is there some kind of process in place that manages to either circumvent this issue or tackle it head-on? And then see how we can improve upon this. Is there a way to patch it, make it more efficient or something? And then, if nothing so far works that is satisfactory, see if we can find a new way to tackle the problem. So that would be my process for this. And digitization is just one way to do that.
00:26:00
But sometimes improving upon an existing process may also involve a new design of the process, which may not be so technology related, but it may involve new technologies. That's a really good answer, really universal to all our listeners. And what piece of advice or principle do you live by that has helped you succeed in your career? Because we discussed a lot about your career, that you have a PhD and you have done a lot in your career. What is the principle you live by that has helped you succeed, or something that you tell yourself? Some affirmation or something that you would like to share with us that helps you?
00:27:01
It's hard to distill everything into one affirmation, but I would say: first get something working without any issues, and then try to make it really good. Because it's often the case, especially for those of us coming from an academic background, that we may want to go for the perfect solution, something mathematically elegant even. But sometimes this doesn't work, or it takes forever to come about. So at first try to get something that works, that computes without any bugs, without any issues, and then refine it over time. There is no shame in having multiple iterations of a data project. Actually, it's a sign of a successful project if there are multiple iterations, as I talk about in the book. So don't try to get everything done at once.
00:27:54
The Pareto principle is a very wise principle, and it doesn't apply only in economics, but also in these kinds of projects. Because you may just need to do 20% of the work that you imagine to get most of the results. So start with that. And it's okay, because you can always mention the limitations of the current models that you build and the current processes that you develop. You don't have to get everything done at once. So that's what I try to live by. And so when I create a new method, for example, a new algorithm or whatever, I start with something very simple: get it to work, refine it, then put in some documentation as well. And then maybe months later, refine it even further. Otherwise, I'd just be wasting everyone's time.
00:28:42
Yes. A lot of people nowadays, especially online, want to multitask, like doing so many things at the same time. And it's sometimes really better to just breathe and do one thing, focus on the one thing that you are doing right now. What helped me a lot recently is tracking my working time: I really put a timer on my phone and just track the working time. And it gives me a really fast dopamine hit, because I see how the timer is going on and that I want to complete, for example, a certain amount of work during the day. And this also helps me to really focus on a certain task of the day, knowing I can dedicate this amount of minutes, hours, or whatever.
00:29:35
So this is my recent hack that I learned, and it helps me a lot with focusing. And so, since we spoke about your books and the data and how data are processed via artificial intelligence, and that a lot of people want to use data, for example, to monitor their success and get something useful: can you talk about the role of storytelling in data science and why it's crucial for making data accessible to stakeholders? That's a very good question. Yes, storytelling is an inherent part of data science work. And it's all about making whatever you have done relatable. So when I present, for example, a Jupyter notebook to the people I work with, I don't linger too much on the code.
00:30:34
Unless somebody wants to go into more detail about it, in which case I can show them, I focus more on the outputs: the rationale, first of all, why am I doing this thing? What am I trying to get? The problems I encountered. And I make it like an adventure. Of course, I may not be as gifted as other people who specialize in this; there are people who specialize in storytelling and visualizations, for example. But everybody in data science, I believe, has this as part of their job, because that's how you convey the insights. The insights themselves may not be as interesting unless you understand the problem beforehand, and the challenges, and how they came about, and how they can be used and refined, and what other things could come about as well.
00:31:21
Because when I present a project, I don't just say, OK, that's it, I'm done, see you in a few months. No, it's a continuous process. So I always have some next-steps kind of thing at the end, saying, OK, this is what I have done, and this is what I plan to do if we continue working on this. Or if somebody else gets involved in the project, this is what they could do as well, and this is how it can be refined. So that's part of the whole storytelling thing. And I'm still working on improving that; it's something that can be improved indefinitely, in my view. Yes, it's also really interesting to, for example, use different comparisons for the different stakeholders, especially if you are working in the IT sector, because we use a lot of technical terms in the IT sector, like 'scrum' or 'tickets'.
00:32:17
And even though in our minds, it's totally clear what we mean by it, usually business people, they don't understand the IT language. And then when we are presenting them data, for example, on how many bugs were fixed during the past period, how many tickets were raised, what is happening in the Scrum currently and the different data, it's important to provide the storytelling to the different stakeholders to speak their own language, because businesses will always better understand and give priority to the stuff that they understand themselves. I mean, if you speak a foreign language to them, then they will never give you that bigger budget for the things that you want to do and how you want to upgrade the different systems in IT.
00:33:09
So I would say that storytelling is a really pivotal part of project management, for any IT project. Or even, like, people who are currently listening to us: if you have any kind of data, just always think about the story behind it. Compare it with different periods, what the data of this period means in comparison with the previous period. Maybe it's fun to give it a story, because this is what makes data interesting, for me myself, to be honest. And if we now change a bit to a more futuristic scene, like really about the future, et cetera: if you could work on any data science project, no matter how ambitious or futuristic, what would it be?
00:34:04
Like, if you could really, right now, jump on some data science project that you heard about in the news, or that is currently cooking, or that is not even here yet, what would you be choosing? Well, there are several that I could choose from, but if I were to pick one, I would say transparent AI. I know it may not sound as exotic, or something that merits a film created around it, but in my view, this is the next step in the development of the field, particularly if it chooses to continue being tied to AI. Because at one point, AI becomes a liability, in different applications, in the autonomous systems that we have.
00:35:10
For example, they may all look futuristic and great and perfect on paper, but in practice, they don't always deliver exactly what they are supposed to deliver. And a big part of that could be because of the lack of transparency in them. And that's something that is really hard to tackle. But if there were people dedicated to doing that, maybe even redesigning certain aspects of AI, and using the existing AIs as lessons learned to see if they could reduce certain parts and get rid of this black box, that could be a project, a long-term project, worth participating in. Because we can talk about AI safety all day long, and there are people who actually look into that nowadays; there are whole organizations that are focusing on tackling this matter.
00:36:09
But, you know, the high-level stuff can only get you so far. At one point, you need to get your feet wet and get into this dirty lake of AI and see how you can either clean it or just develop a new one that is clean by design, so that you can see through it what's happening there, and you don't have any nasty surprises as you go into it. So transparent AI means that we are really uncovering what is inside the artificial intelligence model? Like, for example, let's say ChatGPT. If you were working on transparent AI with ChatGPT, would it mean that you would be taking their data and making it transparent for the wider public? Or what does transparent AI mean exactly?
00:37:07
Well, I was thinking more of the AIs that are used in analytics, but going to the LLM use case that you mentioned, this transparency could take different forms. So the people developing it, first of all, could have very deep access to the system. If it were to be more transparent, they would be able to understand everything that's going on in there. Now, this may not be of interest to the end user, but the people working on the system to improve it would have this full access, and this way avoid potential biases, which are a big problem in all kinds of AI systems today. They would avoid potential hallucinations, which often come about in these LLMs, and all kinds of issues that may arise, based on what they observe happening as well.
00:38:02
But if they have a fully transparent system, they may even anticipate problems that haven't happened yet, and this way save the end users the trouble. As for the end users, transparency in a system like this would be being able to get to the sources of the outputs. So there is a lot of emphasis on prompt engineering, which is great, but how about getting deep into the rationale of the AI system? Like, why does it say what it says? Is there a place where we can go and delve deeper into this stuff? I mean, there are online AI systems that do that to some extent, but they don't always give you a specific site, for example, that you can look at, or a specific data repository that you can perhaps investigate, to get more into the depth of this reasoning.
00:38:56
Because the AI may be able to talk well, but even if it wants to explain itself, I don't think it can right now. I mean, it can to some extent, but it can't tell you: oh, okay, this is the specific data point, or data points, that I used to come to this conclusion or to this advice. And being able to get this kind of transparency into a system may make it much more reliable, and less risky, in my view. I really like what you said about avoiding biases. However, I still don't know how we are avoiding the biases, because we know that, for example, there is a really big gender gap in AI models: the AI models are usually trained on lots of data concerning men, and we have, for example, more data about medical surgeries being done on men than on women.
00:40:04
And it's just a global problem, right? Because of the different legislations, and because women didn't have the same human rights in the past, et cetera, on the global scale it's just a fact that there is more data about men. And currently we are training AI models on the data that are available out there. So how can we be sure that we avoid the biases in certain models when, for example, even the person who is actually doing the data cleaning and is responsible for avoiding biases might be a man, to whom it might not necessarily seem wrong that women's data are not included in certain models? So how can we really avoid the biases, so that the AI models are really without bias?
00:41:04
I don't have an answer to this, but I can offer some strategies for mitigating these biases. I don't think the AI systems are at fault here, or the people behind these AI systems. Sometimes the biases are inherent in the data sets that those systems use. So when an AI system learns the ways of our world, it learns through data. And this data hopefully represents the world accurately. However, that's often not the case, as in the example you mentioned with the gender biases. But there are also other biases, related to race, for example, that are difficult to tackle because the data itself has those biases. So one approach I would propose is to examine the data sets and make sure that they're more balanced in terms of this or that variable that carries the biases.
00:41:56
In the case of men versus women in medical records, we could look for data sets that have, as far as possible, an equal number of cases for the two genders across different conditions. The same goes for other variables, such as cultural background or income level. If we start with a more balanced data set, it's easier to mitigate bias issues. There may still be biases, but they would probably be less pronounced and less likely to surface once the model learns from this data. Because if the data is not very good, there is the GIGO rule: garbage in, garbage out. No matter what you do, if you feed the model garbage, it's going to produce garbage, and the garbage in this case is the bias.
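[Editor's note: the balancing idea described above can be sketched in a few lines of pandas. Everything here is illustrative, not from the conversation: the data, the column names, and the 80/20 split are invented, and downsampling to the smallest group is only one of several ways to equalize representation.]

```python
import pandas as pd

# Hypothetical medical-records dataset with a heavy gender imbalance
df = pd.DataFrame({
    "gender": ["M"] * 80 + ["F"] * 20,
    "outcome": [0, 1] * 40 + [0, 1] * 10,
})

# Step 1: inspect the imbalance before training anything
counts = df["gender"].value_counts()
print(counts.to_dict())  # {'M': 80, 'F': 20}

# Step 2: one simple mitigation -- downsample each group
# to the size of the smallest one
n_min = counts.min()
balanced = pd.concat(
    group.sample(n=n_min, random_state=42)
    for _, group in df.groupby("gender")
)
print(balanced["gender"].value_counts().min(), len(balanced))  # 20 40
```

The trade-off, of course, is that downsampling discards data from the majority group; oversampling or synthetic data (discussed later in the episode) keeps everything at the cost of some duplication or approximation.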
00:42:55
It's also important because it's not an easy problem to tackle, but with proper data engineering I think it is doable to some extent. And do you have techniques or tools that you recommend for identifying and reducing bias in data sets before training AI models? Conventional EDA work is a good place to start; EDA stands for exploratory data analysis. Also some heuristics: there are statistical metrics people can use, but you can also devise your own metrics to scout for the specific biases that create issues in a data set. It seems like a fairly simple problem in principle, but the more variables you have, the greater the chance of these biases arising.
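[Editor's note: the "make up your own metrics" idea can be illustrated with a tiny homegrown heuristic. This function and its name are invented for the example; in practice one would pair such heuristics with standard statistical tests rather than rely on them alone.]

```python
from collections import Counter

def representation_gap(values, expected=None):
    """Crude bias heuristic: the largest deviation of any group's
    share from its expected share (uniform shares by default)."""
    counts = Counter(values)
    total = sum(counts.values())
    groups = sorted(counts)
    if expected is None:
        expected = {g: 1 / len(groups) for g in groups}
    return max(abs(counts[g] / total - expected[g]) for g in groups)

# An 80/20 split against an expected 50/50 gives a gap of 0.30
gap = representation_gap(["M"] * 80 + ["F"] * 20)
print(round(gap, 2))  # 0.3
```

A threshold on such a metric (say, flag any variable with a gap above 0.1) is exactly the kind of scouting heuristic described above: simple for one variable, but worth automating once many variables are in play.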
00:43:46
So it's not easy, but if people devote themselves to this and take it seriously, I think it can be tackled to a large extent. And can you share an example of a project or system where AI bias led to an unexpected outcome? One of my recent projects involved predicting fatigue levels from heart rate data. I was working with a number of different heart rate variability features, and although the features were fine, there was a big bias in the data set that was not easy to pinpoint. Or rather, you could pinpoint it, but it didn't look like a big problem at first: the data set involved different tiers of fatigue, and the lower ones, like very low fatigue, were far more heavily represented than the higher ones.
00:44:52
Even if you grouped the higher tiers into one big "high fatigue" bucket to get a more or less balanced data set, it still wouldn't work very well. Eventually I had to give up on that approach because the prediction performance wasn't worthwhile. However, once I said, okay, maybe we should look at this another way and try to alleviate the bias through the use of synthetic data, the whole data set became much easier to work with and ended up producing a pretty decent result. All right, that's really interesting. And I have one last question: if you could give one piece of advice to aspiring data scientists or AI enthusiasts looking to make a positive impact, what would it be?
00:45:52
I would say start with first principles in whatever you do, because it's easy to get fascinated by the latest and greatest methods and technologies. That may be a good motivator, but I wonder how sustainable it is: once the gloss of novelty wears off, what will keep you going? If you tie things to first principles, and to real problems that you can understand or at least relate to, it's easier to sustain that motivation and to keep learning over the long term. The technologies themselves are great, but what's even better, in my view, is what they can do for people. It's not the technologies that are fascinating, but their value.
00:46:57
So if you attach your mind to this end goal, the value the technologies add, it will be easier to carry on no matter how they change or how complicated things get. Yes, it's like making yourself superhuman, because the technology is here to take over the tasks that are repetitive or that you don't enjoy: we can automate them, using it for summarization, for instance. In the past we would have used an intern or an assistant, and now we can use AI. My AI is my assistant; it helps me with email, it helps me with everything, and it can really make us superhuman. So I definitely agree. And that's a wrap for today's episode of Create the Future Now.
00:47:57
I hope you enjoyed the conversation and gained some valuable insights into data science, especially how data is used in AI, what AI biases are, and how to avoid bias in your own AI models, or at least what to watch out for. We even dove deeper into topics like protecting your data through anonymization, and into practical tips and tricks, such as taking one task at a time, because multitasking is definitely not the mindset you want to apply, especially to data work. I can't imagine jumping from one data task to another; it takes real mental training to stay focused on your goal.
00:48:48
And as always, we are here to help you navigate the evolving digital landscape and create an impactful future with technology. If you loved today's episode, don't forget to subscribe, leave us a review, and share it with others who are excited about innovation. You can also follow Innovatology on social media for more updates, tips, and behind-the-scenes content. And Zach, would you like to mention anything else to our audience as a last comment? Yeah, if you want to learn more about my different projects and books, you can check out my personal website, Z as in Zach Voulgaris. Wonderful. Thank you again to Zach for joining us today and sharing your expertise. And thank you, our listeners, for tuning in. Until next time, stay curious, stay innovative, and keep building the future. See you next time. Thank you very much. Bye.