Why do we need artificial intelligence and where is humanity heading?
Zarina Kopylova: “For me, artificial intelligence is, at its core, our journey toward self-knowledge; it is how we strive to understand ourselves. It is important to always turn inward, to your intuition, your inner voice, and the legacy of your ancestors; this always helps you find your path. We are all familiar with states of altered consciousness in which we can perceive the world in a new way. In such moments we can immerse ourselves in the world of data that surrounds us. It is through ourselves that we reveal the depth of life. Everything we create is merely a set of tools that can help those who have not yet realized the full depth of their being.”
What does artificial intelligence give us in terms of the practical, commercial use of these technologies, and why do people need it?
Albert Golukhov: “Artificial intelligence is, of course, a revolutionary method of processing information, one that drives colossal breakthroughs in technology and art. Why does AI need a moral code? Many of us probably read the works of Isaac Asimov in childhood; he formulated the so-called Three Laws of Robotics back in the 1940s. The first states that a robot may not harm a human being or, through inaction, allow a human being to come to harm. The second requires a robot to obey human orders, unless doing so would conflict with the first law. The third says that a robot must protect its own existence, again, as long as this does not violate the first two laws. Of course, these formulations were made a long time ago and are somewhat outdated. Today the development of artificial intelligence gives rise to a huge number of other dangers to society. That is why AI needs a moral code: to counteract the risks associated with the uncertainty of these technologies.”
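As an illustrative aside, not part of the interview, the strict priority ordering Golukhov describes can be sketched in code. The `Action` fields and the `choose` helper below are hypothetical names invented for this sketch, which assumes an action's consequences are already known; real AI safety constraints are far harder to formalize.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action described by its (assumed known) consequences."""
    harms_human: bool      # First Law: would a human come to harm?
    violates_order: bool   # Second Law: does it disobey a human order?
    endangers_robot: bool  # Third Law: does it risk the robot's existence?

def choose(actions: list[Action]) -> Action:
    """Pick the action that best satisfies the laws in strict priority order.

    Lexicographic comparison: the First Law outranks the Second, which
    outranks the Third, so a lower-priority law can never override a
    higher-priority one.
    """
    return min(actions, key=lambda a: (a.harms_human,
                                       a.violates_order,
                                       a.endangers_robot))

# Example: staying put disobeys an order but harms no one,
# so it outranks an action that would harm a human.
stay = Action(harms_human=False, violates_order=True, endangers_robot=False)
push = Action(harms_human=True, violates_order=False, endangers_robot=False)
assert choose([stay, push]) is stay
```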
How could artificial intelligence make everyone rich, in a financial and economic sense? Do you have a vision on this issue?
Nikolay Oreshkin: “Yes, I sincerely believe that within the next decade or two, any repetitive human labor will be replaced by artificial intelligence, because it will be cheaper, faster, more efficient and more reliable. I also believe that the management of our civilization could be delegated to artificial intelligence. However, there is an ethical aspect here. We have invested enormous resources in writing books: books cannot govern us, yet we have spent trillions of man-years creating them, and we still face the problem of passing that knowledge on to the next generations. If we entrust the management of civilization to artificial intelligence, we risk losing our incentive, our understanding of how that management works, and the transfer of knowledge to future generations. We could lose control. So we need to work through this ethical question: how do we maintain motivation once we all become super rich? What do we do next? We may reach the singularity, a transition to a new level of development, and in doing so place ourselves at risk. Are we ready to accept that risk?”
You are an advocate for the liberation of machines in the context of their monetization and algorithmization. Can you clarify what the goal of this is and why it matters to people?
Dmitry Trifonov: “People need me so that they can use artificial intelligence at a reasonable cost rather than renting supercomputers from large companies such as Microsoft. Artificial intelligence certainly helps us become more productive at work, but in some ways it widens the gap between people. Those with access to advanced AI can do their work far more efficiently and quickly than those without it. As a result, people without access to AI may end up performing less complex tasks and earning significantly less. I believe everyone should have equal access to AI tools, and the rules of the game should be level for all. My goal is to help solve this problem.”
A few years ago you said things that seemed fantastic, but now they are starting to come true. Do you see something we don’t yet know? What should we expect from a moral and ethical point of view?
Alexander Soroka: “I am, generally speaking, an empirical skeptic. While I believe that AI will penetrate every industry in the next 10 years and change them significantly, I do not yet see real AI on the horizon. Almost everyone with hands-on experience of AI understands that, at the moment, these are just statistical models and algorithms that can help us become more efficient and improve our lives. There is no real AI yet, and therefore no threat from it. So I believe that discussions about ethics in relation to AI are premature; we should first sort out our own ethics.”