GPT Party 2.0: The Future of People with AI
The GPT Party 2.0, the largest Russian-speaking networking event dedicated to artificial intelligence, took place in Silicon Valley on October 7-8. More than 300 people gathered at Plug and Play to meet leading experts, entrepreneurs, and investors, discuss the latest trends in AI, and gain practical knowledge.

In panel discussions focused on the future of humanity in the age of artificial intelligence, we explored how AI will change people's lives, the existential risks it poses, and what the future may hold.

The panel discussions featured speakers including Nikolai Oreshkin, Andrew Gree, Alex Krol, Nikolai Spirin, Daniil Kravtsov, David Yang, and Alexander Soroka.

Alex Krol commented, "Currently, those who write and think about AI are divided into two categories: pessimists, who believe a dystopia is imminent, and optimists, who think a dystopia will still happen, just later and not for everyone. The way we fill our 24 hours defines our civilization. Most of the active population spends a significant part of their lives at work, not with family. The main threat from AI is that it will eventually take away many people's work, which is not only a livelihood but, for most, also a source of purpose. This will lead to significant changes in the social structure. If a large number of people suddenly lose their sense of purpose, a host of social problems will arise. Briefly, that's my view."

Nikolai Oreshkin stated, "I see three trajectories for humanity's development. The first is pessimistic, as Alex mentioned: a collapse where everything ultimately ends. The second is a plateau, where at some point we reach a certain level of knowledge and continue to exist with a different set of parameters. The third, optimistic one, is transcendence, where we start to become the best version of ourselves. AI technologies will give us a push in a way we can't even imagine now. These are the three main directions we're heading in, and I sincerely hope for the optimistic one."

Andrew Gree added, "I want to present an optimist's viewpoint, as I'm actively working in this direction. I believe we'll reach transcendence in less than 10 years. Society already has one foot in the post-singularity, and the transformations will begin in the next 2-3 years. LLMs, and ChatGPT in particular, are models of humans; if we bring them together into a community, we can conduct experiments on human society that were previously impossible. Based on these models, this will be a huge push toward understanding what it means to be human, and to be humanity. As a society, we'll draw conclusions much faster about who we should become post-singularity."

Nikolai Spirin remarked, "Predicting the future is a thankless task. A year ago, ChatGPT didn't exist, and we don't know what will happen in 2-3 years; that's a very long planning horizon. So I'll try to look at it more pragmatically and say what I, as an expert in the field, can vouch for in the next 6-12 months. I believe in hybrid intelligence: a system that combines human and artificial intelligence to solve a specific business process symbiotically. Yes, most tasks can be automated with AI, but there will always be a subset of cases that AI cannot handle. In business, there are always complex cases where the cost of error is critically high and which you can't entrust to AI alone. But with hybrid intelligence, you can achieve the quality the business requires."

Daniil Kravtsov concluded, "I'll approach the question differently and talk not about what our future will objectively be, but about how an individual should relate to these changes. People have always believed in apocalyptic myths: judgment day, nuclear war, and so on. Most people are conservatives; everything new is dangerous and scary to them. This is normal, and such people maintain stability in society. If suddenly everyone started innovating, society would collapse. But some people are not afraid and experiment. Choosing one's position here is a subjective matter. Personally, I've decided not to worry about these existential risks, to believe that everything will be fine, and to prepare for the best."

David Yang stated, "AI and solutions built on LLMs (large language models) will definitely be able to replace humans in junior positions. Currently, there are about 1 billion people in the world who are considered knowledge workers. In our International Association of Digital Employees, we expect about 300 million digital workers. We believe that these 300 million jobs won't replace human jobs but will be added to the 1 billion. So, in a few years, the 1 billion human knowledge workers will be joined by an additional 300 million nonhuman workers.

"Our association's position, and my personal view, is that senior positions will remain with humans. Then the question arises: how can people reach senior positions without experience in junior roles? We see it this way: people will initially aim for senior positions, but they will go through a period of internship together with nonhuman workers, teaching each other, and then humans will start assigning tasks to digital employees."

Alexander Soroka added, "Speaking of existential risks, I'd like to note that, as of today, artificial intelligence does not possess its own consciousness. Currently, it's just a tool, but people always fear new technologies because they can bring both progress and destruction. The threat lies in whose hands this tool falls. The main danger is that these tools can influence mass consciousness. In the future, having such a button, one that can answer all of a person's questions, could allow an entire society to be controlled."


Malikspace Corporation.
541 Jefferson Avenue
Redwood City, CA 94063
Email: a@murs.ai