Introduction to AI Regulation
Ryan: Welcome everyone to the Tech Business Roundtable Podcast. I’m your host, Ryan Davies, and I’m hosting today’s discussion on the effects of AI regulation on businesses. Here we go. Kanstantsin Vaitsakhouskim. How close was I? I’m really getting there. We’re going to go with Kanstantsin because Kanstantsin is kind to me, and he allows me to use the nickname so we can make it easy. So, Kanstantsin, you’ve had 12 years of experience in software development and R&D product discovery, with a big focus on AI projects and products, and multi-domain experience. We talked even before we started here; you’ve worked in and done startups in a variety of industries, and it gives you a really good non-standard product vision that goes all the way from ideation to research, to launch, to support, to post-care, to exit. Like, you cover it all. You’re really big on deep analysis, technical excellence, innovation, and long-term business value. Right now, you are with APRO Software, and one of the projects you’re working on is Adventum.ai. We’ll dive into that, but Kanstantsin, thank you so much for joining today. I can’t wait to talk about this. This is a huge topic with AI and the regulations that are impacting businesses today.
Kanstantsin: Yeah, great. Thank you for your introduction. Everything is correct. And I really like all this stuff about AI and all kinds of businesses that are connected with AI. That’s my passion.
Ryan: Perfect. And that’s a great place for us to start: color in the background a bit more in terms of what got you into AI and really got you excited to take this to the next level.
Overview of Kanstantsin’s AI Expertise
Kanstantsin: So AI right now is the most advanced area on my side, and I really like it. I have about five years of experience working closely with AI projects from different domains, from MedTech and healthcare to security and fintech as well. This Adventum.ai project, which we have already touched on a little bit, is a really innovative system for personal investments. We are trying to use cutting-edge mathematical approaches and transform them into algorithms on a very precise and deep AI basis. Basically, this topic is also really interconnected with the regulation that you mentioned.
Evolution of AI Regulations – Global Perspective
Ryan: This is such a massive topic, and I think this is a big thing for tech businesses, for founders, for small tech, like for people who are in startups, right through to the largest tech and non-tech companies in the world, to be able to understand the effects that AI regulation is having on businesses. So give us a bit of a background. I think people know AI started, it kind of ran wild, and then governments around the world went, “There could be some clamping down we’ve got to do.” But what do you see in terms of AI regulations and the evolution of that to where we are now?
Kanstantsin: So, some background at the beginning. AI is not a new technology, but in recent years, we have achieved quite significant success. It’s nearly singularity. The progress goes so far and so quickly that regulatory institutions and governments have decided to make some kind of regulations and rules on how to play with this technology. But the big problem is that nobody knows how to regulate this domain, because it’s so new, so rapidly developing, and so cutting-edge that it also requires new approaches to regulation. People are still determining the exact ways and approaches for this. So those are just very general words, but this topic has just risen, and we have two big regulatory centers in the world right now. The first one is in the USA, and just this week, Joe Biden announced a huge change there. That’s mostly for huge technology companies that develop really cutting-edge solutions, like OpenAI, Microsoft, and Google. They now must report to the government about their cutting-edge, huge AI systems before going to market. The second center is the European Union. The European Commission, the main governing body of the Union, is also trying to regulate AI across all domains: next year, in the first quarter, they will launch the AI Act, an entire law that will regulate services related to artificial intelligence.
Recent AI Safety Summit Insights
Ryan: You know what, how incredibly fitting is it that we scheduled this a while ago, for us to chat here today? Just this week, we had a huge AI Safety Summit that took place in London, where you had China, the US, and the EU all together, shining some light on some of these things that are happening and the reasons for them.
Kanstantsin: That’s true. That’s unbelievable. Even China came to the same conclusion; they all made a common statement, and they accepted the declaration. This is something unbelievable, a really unbelievable political event.
Ryan: You don’t see that very often, China, the US, the EU, everybody working together like that, right? I think they had Elon Musk, they had every big player there, taking part in this. So what were some of the takeaways you took away from it? I know, again, it’s fresh; it was literally a day ago that this happened. But were there any key takeaways besides that piece about China’s willingness to cooperate, which is huge? Is there something else that may have come out of it, or was this just to show that rapid development needs everybody together on this front?
Kanstantsin: So, right now, these takeaways are quite high-level and quite general. But anyway, everybody accepted the necessity of regulation and compliance, and they released a common statement. Also, the USA announced the creation of a new institution, a special organization that will regulate this domain. That’s quite interesting, and maybe they will transfer the experience from the European Union. The participants of this summit also accepted the necessity of very careful development of AI and discussed a lot of important questions. Unfortunately, a lot of these questions are still some kind of secret, because I checked through different sources and saw only very general information. But it’s really incredible.
Impact of Regulations on AI Development
Ryan: Yeah, I think the key is, you look at trying to regulate AI, and one of the problems is what gets regulated. Europe is probably the strictest right now. The US is a little bit behind that. In Asia, it’s a little bit of the Wild West in some places in terms of what you can do. I know that you and I have talked about this before, in other conversations, about just how challenging it is for a business to recognize where the market is and what the regulations are there. So what have you faced around that? I know you’ve got, again, lots of experience in this. How do you cater to that?
Kanstantsin: That’s a really interesting question, and a very important one. We can divide the entire world into three future regions. The first region is the European Union. The upcoming regulation in the European Union will be the most strict, and it will regulate everything about artificial intelligence. The second region is the USA, which has another system and another approach; they are not going to regulate everything. They have more of a specific approach: there is an issue, so you solve the issue. You don’t try to solve everything in advance; you address existing issues. The third region is the rest of the world, which should just listen and watch how this goes in these two big centers of technological development, because in China and India, for example, this regulation is still just an idea. Even last week and previously, I heard a lot of concerns from different experts that if the European Union and the USA make really strict regulations on AI technologies, then China will have a huge advantage, because they have real freedom to develop it. But this summit in London shows that even China understands that it’s very important and crucial to make it safe, and this is a really great result. I have huge respect for them.
Ryan: That’s great to hear. Because I think a lot of people in this space are scared of regulation; they don’t want government overreach, or they’re worried their business is going to shrink, potentially, or whatever it is. But, again, you’ve got lots of experience in startups, especially in AI and in this space, Adventum.ai being one of them. So, let’s use that as a real-world example. Tell us about Adventum.ai, what AI regulations exist there that impact your business, and how you are monitoring them so you can stay ahead of those regulations and ensure that you’re not stifling growth.
Kanstantsin: Taking this as a concrete example, it’s really hard to stay ahead of this regulation, because everything is at the very beginning, and nobody knows how it will work in reality. But anyway, you can be prepared. The information about how it will work is already publicly available. For example, in the European Union, you must remember that if you’re going to work with the European Union, or with its citizens or companies, with your services, it doesn’t matter where your company is located: you must be compliant with the European Union regulations. With the USA, the situation is a little bit different. I’m located in the European Union, and Adventum.ai is as well, so we must comply with the EU rules. The European Union divided AI services into several categories by the risks that they can pose.
For instance, take cases akin to mine at Adventum.ai, where we focus on investments. In our startup, we provide suggestions and predictions. It’s important to note that we’re not dealing with actual funds; instead, we’re essentially working with future scenarios. This is a bit of a tricky question for me at the moment: on one hand, this system might be perceived as high-risk, and on the other hand, it could be seen as a moderate-risk system. Regardless, it imposes additional responsibilities to ensure transparency. Demonstrating transparency in this context involves showcasing the workings of the system and explaining, in a precise manner, why it makes specific decisions. It also means that I have to track everything, from the data to the algorithms and changes, across the entire product life cycle. This is crucial. Basically, this is the answer to how you can be prepared for such things and how you can prevent possible regulatory issues, because, at the end of the day, we will have them. The first step is to be 100% aware of all your data: that this data was obtained in a 100% legal way, so that you don’t have any privacy violations or copyright issues. Only this way can you use this data totally safely. As the next step, you must be sure about the technologies and algorithms you use. You must track all the changes, and you must keep a culture of transparency inside your company, because all the development processes that are connected with AI must be extremely well documented. AI right now cannot be a black box, because you will need to explain how it works to the European Commission, for example. The level and, let’s say, the depth of this explanation basically depends on the risk level that your system can potentially pose. This is just a very brief example of how you can navigate these possible issues.
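To make that tracking idea concrete for readers of the transcript: the data-and-model provenance trail Kanstantsin describes can start as something as simple as an append-only audit log that records where each dataset came from, its legal basis, and each model change, and can be exported for an auditor. This is a minimal sketch in Python; every name here (ProvenanceRecord, AuditTrail, the example dataset and version strings) is hypothetical and not tied to any specific compliance framework.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ProvenanceRecord:
    """One auditable entry in a product's data/model trail."""
    dataset: str        # which dataset this entry concerns
    legal_basis: str    # e.g. "licensed", "user consent", "public domain"
    model_version: str  # version of the model affected by the change
    change_note: str    # human-readable description of what changed and why
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditTrail:
    """Append-only log of data and model changes over the product life cycle."""

    def __init__(self) -> None:
        self._records: list[ProvenanceRecord] = []

    def record(self, **kwargs) -> ProvenanceRecord:
        entry = ProvenanceRecord(**kwargs)
        self._records.append(entry)
        return entry

    def export(self) -> str:
        # JSON export that could be handed to an auditor or regulator.
        return json.dumps([asdict(r) for r in self._records], indent=2)


trail = AuditTrail()
trail.record(
    dataset="vendor_prices_2023",
    legal_basis="licensed",
    model_version="1.4.0",
    change_note="retrained with Q3 market data",
)
print(trail.export())
```

In a real product this would live in versioned, tamper-evident storage and be written automatically by training pipelines; the point is only that the "track everything from data to algorithms" discipline is cheap to start and expensive to retrofit.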
Balancing Regulation and Innovation in AI
Ryan: And so, where do you think these regulations are going to have the most impact on a business? Is it around marketing? Is it, like you said, data analysis and being able to store or use data? Is it customer communication? Or is it just going to be that wherever you are in your business, there’s going to be some silo this impacts if you have an AI element attached to it?
Kanstantsin: If you have AI, it will impact you. But the level of this impact really depends. For example, if you have an educational service, or a service that works with employment processes, or some kind of law enforcement service, that kind of system will be considered a high-risk system, and you will have to provide an extremely detailed explanation of how it works, how it makes decisions, what your data is, and so on and so forth. Also, if you work with certain systems, say for national security agencies, there are some carve-outs, so you can avoid such regulation entirely; it just will not be so public. But I think there are not a lot of businesses that work with national security in our world. And if you develop generative AI services, which are extremely popular right now, I think every big company, no matter what they do, toys, furniture, or cell phones, wants to have their own generative AI system, for images, for text, and so on.
Generative AI has a special paragraph in this regulation, a special regulation of its own. Basically, there are requirements on the model design to prevent it from generating illegal content, and you must disclose that AI generated the content. Basically, this regulation takes a human-centric approach. The idea inside the European Commission is that AI mustn’t replace humans; it must help humans. It means that if you would like to replace somebody with an AI system, that’s not a very good idea from this point of view.
Ryan: I think that’s perfect. We could talk about the benefits, and that’s one of them: we want to mitigate that social and economic impact, like you just said, of people being taken out of their jobs, because if that happens at scale, it seems like society comes to a crashing halt. And you mentioned the point of it is so that you have an ethical use of AI, you’re safeguarding human rights and human safety, and you’re enhancing what we’re doing; we’re trying to speed up innovation and things like that. The other argument, though, and I’d like to get your opinion on it, is people saying that too much regulation is going to stifle that innovation and actually prevent growth from happening. Do you see that as the case?
Kanstantsin: Unfortunately, I do see that case, because regulation means additional bureaucracy, and additional bureaucracy means additional spending, additional cost. It means that companies who develop AI solutions will have to invest more in those solutions just to pass these regulatory hurdles, because you must put more money into these bureaucratic processes; you must change the development process, the data-obtainment process, and so on and so forth. So that’s a very tricky and interesting question. Also, because it’s obviously the very, very beginning, nobody knows how it will work. Right now, it’s a little bit speculative, but people are really quite nervous, because there is a risk that somebody says, “Oh, you have a nice solution, so we want to regulate it, and you must do nothing until we approve you to continue with your project.” That’s a real chance. This also means that for small companies, for small startups, in some cases, for example, if they develop services for education, which is a high-risk category, it will be extremely expensive or even impossible to pass this regulation, just because of financial issues, timing issues, and so on and so forth. So, there is a risk that this regulation will help huge companies grow. Basically, we have a risk of monopolization. That’s what I see here.
Future Perspectives and Mentorship
Ryan: That makes sense. If you’re trying to bring this into your business, or if you’re a startup looking to take advantage of what AI can offer in any aspect of your business, it can become prohibitive, right? Or if you’re starting up an AI business, you have to be careful not to get too far ahead, because you could spend and spend on something that is never allowed to see the light of day. So, for our listeners’ sake, I think this is a great way to tie it all together. I know you’ve touched on it a little bit, but how do you handle risk mitigation? Say you’re mentoring somebody, and they go, “I’ve got this great idea.” How do you tell them, “Hey, step back; here are the things you need to make sure of before you get too far ahead of yourself”? Because people get excited, right? “I got the idea; here I go.” And then all of a sudden they’ve spent, like you said, millions of dollars, and then they turn around and go, “I can’t use it.”
Kanstantsin: Yeah, exactly. Basically, during this communication process, during mentorship and communication with partners, we make a point of discussing this issue with them, of modifying their processes starting from data collection, to be sure that everything is in the right order and that we have all the documents to show that this data set was obtained from this organization, by these people, in a 100% legal way; that we use this model, which this company developed, and it works according to this scientific paper. Everything must be extremely transparent. We are trying to take a really proactive approach in the description of our systems: in the documentation, in the project and product development processes, and in the communication with partners and developer teams, so that all the developers know the risks. This is also crucial, because the people who are in charge of the development must know this; they must understand it and take it into account when they prepare the source code itself and write comments inside the source code. It’s a very, very interconnected process. At the end of the day, you will be able to show that you took care of all these small steps, all this security, all these AI safety things, from the very beginning. That would be really great, and maybe it will help you save a huge amount of money during the regulatory or compliance process.
Ryan: Deploying AI can be very cost-effective, and it could be very good for your company. But, like Kanstantsin just said, it could also be deployed the wrong way, so you have to be very careful about that. You talk about mentorship; you talk about learning from somebody who knows, right? And here at Tech Business Roundtable, that’s what we want to do: connect people in that format with that in mind. Your expertise, you’ve got a lot of it. I’m sure people are listening right now who are going, “OK, I need to ask a couple of questions here on this one.” Kanstantsin, how can people get in touch with you and learn more about what you do, and take advantage of that opportunity to learn more from someone like yourself?
Kanstantsin: You can find me on LinkedIn, and I hope this podcast will include some information about me. Also, you can find the website of my current company, APRO.ai, and contact me through the website. I will be very happy to share my insights and my experience with you.
Conclusion
Ryan: That is fantastic. All right, here we go again: Kanstantsin Vaitsakhouskim. How close am I getting? Am I getting closer? Am I getting further away? All right. That’s good. Well, we’ve had a great conversation here today, and I’m hoping we’re going to have you back so I get to see more of you and have more great conversations, because this world is going to continue to grow. Again, like with the AI summit that just happened, it’s funny, because I keep thinking sometimes that we’re deep into regulation, and you always bring me back: “Ryan, we’re nowhere close. We are at about 1% of where we need to be.” We are really at the beginning stages of a truly unstoppable topic. So, I think we’re going to have more conversations on this, and I want to take a minute to thank Kanstantsin for this amazing podcast on the effects of AI regulation on businesses. I also want to thank our listeners; we can’t do what we do without you. Kanstantsin, thank you again for being here. Can’t wait until next time.
Kanstantsin: Thank you for having me.
Ryan: Perfect. So, until we meet again with another amazing TBR episode, I’m your host, Ryan Davies. Take care everybody.