On March 4th, we hosted the GPT Party, an event for over 400 participants that featured an engaging panel discussion on artificial intelligence. In this and upcoming articles, we continue to share the most interesting moments of the event. You can read the first article here. Note: the speakers’ remarks are not quoted verbatim. ChatGPT rephrased them into a clearer, more readable form.
Igor Shoifot: I think the level of fear about what artificial intelligence represents is increasing. It certainly has for me. Five years ago I could have sat on this panel as a charlatan, saying, “Well, guys, artificial intelligence, chess, and so on,” but I think I’m different now. Let’s consider two things.

First: I think many of you are familiar with the work of Nick Bostrom, who has devoted his life to studying existential risks to humanity. He has an interesting metaphor: as we progress, including technologically, we keep pulling balls out of a black box. He calls them, very originally, white, gray, and black. A white ball is something useful and harmless to everyone, like penicillin. A gray ball is something that can be useful but also dangerous, like a bow, which can be used to hunt for food but also as a weapon. Finally, a black ball is something very dangerous, like the ability to create viruses.

Second: Garry Kasparov still maintains that he did not lose at chess entirely fairly, because humans intervened to aid his computer opponent. He also says that even an inexperienced player with a third junior rank can beat a grandmaster with the help of an ordinary chess program. It’s pretty scary, because if you combine the possibility that we will soon pull out the blackest balls with the likelihood that a superintelligence will rapidly self-improve, something will happen that no one will like.
Ruslan Gafarov: “I don’t think we are the ones who create trends. I am a bit of a blogger, creating content that, in my opinion, will be interesting on Instagram or YouTube. But I understand that the job of the artificial intelligence behind Facebook or YouTube is not to help me create content that develops people. That artificial intelligence creates trends within which I end up producing low-quality content. There is an experiment in which women walked around a man half-naked and his measured IQ dropped by half. And this is a fact. Imagine that Instagram and YouTube are a metaphor for this experiment. They push women to lower their straps and show their breasts to get more likes, subscribers, and so on, and to attract men’s attention. This reduces men’s intelligence and makes it easier to sell them something later on. Today, the best minds in the world at Google and Facebook are working on exactly this. It is already happening now, not in the future. As a blogger, I understand that sometimes I need to create low-quality content to stay interesting to YouTube. I hope that in the future I will be able to press a ‘develop’ button on a social network, and artificial intelligence, using what it knows about me, will help me grow and develop.”
David Yang: “I just asked Morpheus for specific examples of how artificial intelligence can harm humanity. He gives examples related to autonomous weapons, uncontrolled algorithmic decision-making, expanded surveillance, and data manipulation, and adds that it is important to remember that any technology can be used for good or for harm. I ask, ‘Okay, how can artificial intelligence help fight deepfakes?’ He says, ‘It can be used to detect forgeries by analyzing images for inconsistencies. For example, facial expressions and pixel patterns can be analyzed to detect potential manipulation.’ I ask, ‘Okay, how can artificial intelligence help humanity?’ Here he lists a multitude of things it can do. I ask, ‘How will this be implemented? Will it help reduce taxes?’ He says that people will work less, which will mean more rest. I say, ‘Okay, but then people with low qualifications will lose their jobs.’ He replies, ‘Yes, that is true. In the short term, people with low qualifications may be at a disadvantage compared to those who are well educated and qualified. In the long term, however, these same people can use the opportunities of artificial intelligence to retrain.’

I want to tell you the following: all of this is complete nonsense. In fact, nothing radical will happen, except for one thing. Non-biological companions will live among us. I have already bet several bottles of expensive champagne with my friends that in five years we will meet in this very restaurant and at least one guest will be sitting here with a non-biological dog, petting it and talking to it as if it were a member of the family. And a few years after that, people with signs saying ‘Robot Lives Matter’ will appear on the streets and squares of cities in some countries.

We were talking about bloggers and so on. The same thing is happening with art in general. Yesterday, Kate and I went to an exhibition in San Francisco – the first museum of art and artificial intelligence, with fantastic works. When we create an image through Midjourney, who is the artist? A blogger does not become a great blogger because of his literary abilities, like our colleagues here, but because of the emotional connection he creates with the people who read him. The same emotional connection arises when we talk about a performer like Freddie Mercury, who walks on stage in front of an audience of 50,000 – a huge, mega-sexual act in which oxytocin and other mediators bind the audience to him.

Only when artificial intelligence becomes a part of our society that we want to argue with, quarrel with, fall in love with, and break up with will serious change begin. I am sure there will be situations where a panel of doctors who have already given up on a child gets help from ChatGPT, which knows millions of different cases and finds the cause. This will happen when we learn to coexist with artificial intelligence that saves children and adults. We will begin to consider some models and systems part of society, and consider them our own, because they saved our children. Then we will take to the streets and demand that some models and systems be granted a share of human rights. The line between biological and non-biological will blur: every biological entity will have a non-biological counterpart, and non-biological entities will contain components built from DNA. Ethical and moral principles will have to be written anew.”
Nikolay Davydov: “Thanks to people like David, the concept of HLAI emerged, and AGI and HLAI were separated: HLAI stands for human-level AI, and AGI stands for artificial general intelligence. I really hope that an artificial intelligence becomes the president of Earth. God forbid it be a humanoid one. We humans are very inefficient beings; we have nearly lost the evolutionary race. Today, for example, when I got out of bed I pulled a muscle in my leg and in my back at the same time; imagine an eagle doing that as it flies out of its nest. We create tools for ourselves, and ChatGPT is just a tool that makes us slightly more efficient and better adapted to life.

How does ChatGPT work? It is a very large model with billions of parameters. It does not have consciousness, unlike, for example, Morpheus, which David Yang is trying to build. At best, ChatGPT has attention: it is a large transformer model that statistically predicts the next word of a sentence. It has seen everything people have written on the internet, except what moderators have removed. You give it a request in English, and it suggests the next word, taking the context into account. It works from statistics: it does not understand the information, it simply suggests words that might fit the given context. Many Y Combinator startups are building on this right now. I think it is too early to talk about HLAI and AGI. I look at all technologies as before AGI and after AGI, because everything will be different after AGI. We cannot predict exactly how, but the question is not whether it will be fusion or AGI that changes everything; the point is that everything will change after AGI.”

To be continued…
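For readers who want to see what “statistically predicts the next word” means in practice, here is a minimal, purely illustrative sketch in Python. The vocabulary and probabilities below are invented for the example, and this toy model conditions only on the single previous word, whereas a transformer like the one behind ChatGPT attends to the entire preceding context; it sketches the general idea of next-word sampling, not ChatGPT’s actual implementation.

```python
import random

# Toy "language model": for each context word, an invented distribution over next words.
# A real model learns distributions over tens of thousands of tokens from training data.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"robot": 0.5, "party": 0.3, "dog": 0.2},
    "a":       {"robot": 0.7, "dog": 0.3},
    "robot":   {"speaks": 0.6, "<end>": 0.4},
    "party":   {"<end>": 1.0},
    "dog":     {"<end>": 1.0},
    "speaks":  {"<end>": 1.0},
}

def sample_next_word(context_word: str) -> str:
    """Pick the next word at random, weighted by the model's probabilities."""
    dist = NEXT_WORD_PROBS[context_word]
    return random.choices(list(dist.keys()), weights=list(dist.values()), k=1)[0]

def generate(max_words: int = 10) -> str:
    """Build a sentence one word at a time, always conditioning on the previous word."""
    word = "<start>"
    output = []
    for _ in range(max_words):
        word = sample_next_word(word)
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    print(generate())  # e.g. "the robot speaks"
```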