GPT Party 3.0, the largest Russian-language networking event dedicated to artificial intelligence, took place in Silicon Valley on March 9-10. More than 450 people gathered at Plug and Play to meet leading experts, entrepreneurs, and investors, discuss the latest trends in artificial intelligence, and gain practical knowledge.
At GPT Party, Andrew Gray, founder of HyperC, an AI-powered Amazon trading company, spoke on the topic “Old AI vs. New AI: Boosting profits and hiring bots with two practical cases.” Andrew explained the difference between classical and modern artificial intelligence, and shared cases of using AI in his projects.
My main business now is investing in products using artificial intelligence. For example, we use automated decision-making systems to determine what is worth buying on the Amazon platform. Of course, I can’t help but pay attention to the development of assistants – this is my passion. Therefore, the second part of my presentation will be devoted to the use of modern chatbot technologies to solve business problems in real time. Let’s consider how all this can be assembled and automated.
We will compare old and new artificial intelligence through specific cases. Old AI was difficult to use; modern AI can be applied to individual problems with minimal effort, though it comes with quirks of its own. I will demonstrate this with two examples. The first is the use of AI for product arbitrage on Amazon. The second is a pipeline in which several chatbots pass information to each other to create content.
Let’s start by looking at classic AI using the Amazon speculation model as an example. Amazon arbitrage is a strategy where we buy products from a distributor for $10, sell them under our own brand for $100, and make money on the difference, in this case $90. It seems simple until we run into a situation where the products may have already been sold on Amazon for less than we bought them for, and we risk losing money. This is where artificial intelligence comes in.
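The unit economics of the arbitrage example above can be sketched in a few lines. This is an illustration, not HyperC's actual model; the 15% fee rate is a placeholder assumption, since the talk quotes only the $10/$100/$90 figures without fees.

```python
def arbitrage_profit(buy_price: float, sell_price: float,
                     fee_rate: float = 0.0) -> float:
    """Net profit per unit for a simple arbitrage trade.

    fee_rate is an illustrative marketplace-fee assumption (e.g. 0.15
    for a 15% referral fee); the talk's example uses no fee at all.
    """
    return sell_price - sell_price * fee_rate - buy_price

# The talk's example: buy at $10, sell at $100, pocket $90.
print(arbitrage_profit(10, 100))        # 90.0
# The risk case: the market price drops below what we paid.
print(arbitrage_profit(10, 8))          # -2.0
```

The second call is exactly the losing scenario the speaker describes: if the product is already selling below our purchase price, the margin goes negative, and this is where predictive AI earns its keep.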
AI lets us speculate on products by predicting when a given product will fall to, say, $10, so we can buy low and resell at a profit. We recently moved on to a more advanced model called Moonshot, which forecasts the profit on an investment as soon as the money is committed and automatically reinvests the returns.
This applies not only to Amazon but to any business where decisions come down to which assets to invest in, or to retail trading in general. AI maximizes profit by determining where to put our money right now: it analyzes current market conditions and tells us how much of a product to order today and how much profit it will bring.
To do this, we simply build a table of our previous transactions: the quantity of a product ordered for the current month and its price today and yesterday. Artificial intelligence analyzes this table and predicts profit. The whole procedure becomes very simple with GPT-4: we can ask it to generate a model in Python and then get a profit forecast from the data in the table. Such tasks are now solved in a standard way thanks to artificial intelligence, and this is what anyone who wants to stay competitive in the market should be doing.
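A minimal sketch of what such a generated model might look like, assuming a table with columns for units ordered, yesterday's price, and today's price. The data and column layout here are hypothetical, not HyperC's; a plain least-squares fit stands in for whatever model GPT-4 would actually produce.

```python
import numpy as np

# Hypothetical transaction history: each row is
# (units_ordered, price_yesterday, price_today); y is realized profit.
X = np.array([
    [100, 12.0, 11.5],
    [150, 11.5, 11.0],
    [ 80, 13.0, 12.8],
    [200, 11.0, 10.5],
])
y = np.array([450.0, 700.0, 300.0, 980.0])

# Ordinary least squares via lstsq, with an added intercept column.
X1 = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predict_profit(units: float, price_yesterday: float,
                   price_today: float) -> float:
    """Forecast profit for a prospective order from the fitted model."""
    return float(np.array([units, price_yesterday, price_today, 1.0]) @ coef)

print(predict_profit(120, 11.8, 11.2))
```

The point of the talk is not this particular model but the workflow: describe the table to the chatbot, let it write the fitting code, and read off the forecast.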
Let me share some real-world experience implementing the model we called Moonshot. The original plan was ambitious: 1,000+ columns of data; gradient boosting over 2,000+ rounds; more than a trillion sparse records compressed to 100 GB; 600 GB of memory and 10 TB of disks at 20+ GB/s; GPU-only training; an API processing 100+ GB; two months of implementation. However, after two weeks we realized our estimates were off by a factor of ten. We learned that building a model that outperforms everything on the current market takes an enormous amount of data. My takeaway: start with something simple and understandable that delivers value, then gradually add complexity.
There is almost no artificial intelligence in the retail sector right now. Our model collects data from a huge catalog of about a million products; for example, we can analyze interest-based indexes and predict the profitability of products on Amazon. It already works: the simplest product earns from 0.3% to 1% of the invested amount per week. We plan to take this idea further and build an exchange based on retail trade, similar to the stock exchange. At the moment, however, chatbots cannot process that many products to predict the results.
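To put the quoted 0.3%–1% weekly return in perspective, here is the arithmetic of compounding it over a year. This is simple interest math on the talk's figures, not a claim about the actual product's performance.

```python
def compound(weekly_rate: float, weeks: int, principal: float = 1.0) -> float:
    """Value of `principal` after `weeks` of reinvested weekly returns."""
    return principal * (1 + weekly_rate) ** weeks

# The talk's range, compounded over 52 weeks:
print(round(compound(0.003, 52), 3))   # ~1.169, i.e. ~17% a year
print(round(compound(0.01, 52), 3))    # ~1.678, i.e. ~68% a year
```

Reinvesting is what makes the low weekly figure meaningful: 1% a week compounds to far more than 52% annually.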
Let’s move on to another use case that ChatGPT already does well, which is working with content in a pipeline mode. I am a big fan of this idea and I see that we will soon have 1000 artificial employees working. Managing them, training them, and working in a pipeline mode will be a must. One of the use cases we are considering is creating diverse content, for example, from transcripts of interviews or speeches.
The idea is to use several ChatGPTs in a pipeline mode. When the text of a transcript is published online, it is picked up by one robot, which processes it and passes on the task of creating content for a certain format and direction to other robots. The finished content is then sent to specific people for verification, but the bulk of the content creation work is done automatically.
To build such a mechanism, we need robots that can talk to each other, so we developed a technology that lets several ChatGPT instances communicate in a common chat. They must remember context and interact with human employees. In addition, we need a technical bot that collects text fragments from the website and summarizes them; after completing its task, it posts the result to the common chat, where the downstream bots carry out further instructions and forward the messages to humans for final review and publication.
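The content pipeline described above can be sketched as a chain of stages posting to a shared chat. Everything here is a placeholder for HyperC's actual system: `call_model` is a stub where a real LLM API call would go, and the stage prompts are invented for illustration.

```python
def call_model(system_prompt: str, text: str) -> str:
    """Stub for a chat-model call; a real system would hit an LLM API here."""
    return f"[{system_prompt}] {text}"

# Each "bot" is one pipeline stage: a role prompt applied to the
# previous stage's output (hypothetical stages).
PIPELINE = [
    "Summarize the transcript",
    "Draft a blog post from the summary",
    "Rewrite the draft as a social-media thread",
]

def run_pipeline(transcript: str, shared_chat: list) -> str:
    text = transcript
    for role in PIPELINE:
        text = call_model(role, text)
        shared_chat.append(text)   # every bot posts its result to the common chat
    return text                    # the final content goes to a human reviewer

chat = []
result = run_pipeline("raw interview transcript...", chat)
print(len(chat))  # 3 — one message per bot in the pipeline
```

The shared chat is the key design choice the speaker describes: bots coordinate through a common channel that humans can also read, rather than through hidden point-to-point calls.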
During active testing, we identified several key issues that arise in pipeline mode and still need to be solved.
- Forgetfulness. When there are many tasks, bots may forget some of them. This can be solved by breaking tasks into small components.
- Lost messages. The solution is to ask again if the task is not completed. It is also important to teach managers to communicate effectively with bots and to monitor task execution.
- Exceptions in otherwise debugged processes. Bots need to be told in advance what to do when an exception occurs, and managers should likewise know how to handle bots in these situations.
- Self-directed behavior. Bots may start acting on their own or refuse to perform tasks. To counter this, tasks and responsibilities must be clearly divided between bots and their actions monitored. Bot hallucinations and the time it takes to explain a large task to a bot are further problems.
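Two of the mitigations listed above, splitting tasks into small components and re-asking when a message is lost, can be sketched as plain retry and decomposition logic. The `flaky_bot` stub is hypothetical; it just simulates a bot that occasionally drops a task.

```python
import random
from typing import Optional

def flaky_bot(task: str) -> Optional[str]:
    """Stand-in for a bot that sometimes loses a message (hypothetical)."""
    return None if random.random() < 0.3 else f"done: {task}"

def run_with_retry(task: str, attempts: int = 5) -> str:
    """'Ask again if the task is not completed', as the talk suggests."""
    for _ in range(attempts):
        result = flaky_bot(task)
        if result is not None:
            return result
    raise RuntimeError(f"task never completed: {task}")

def split_task(big_task: str, n_parts: int) -> list:
    """Break a large task into small components to fight forgetfulness."""
    return [f"{big_task} (part {i + 1}/{n_parts})" for i in range(n_parts)]

random.seed(0)  # deterministic run for the example
parts = split_task("write weekly report", 3)
results = [run_with_retry(p) for p in parts]
print(all(r.startswith("done:") for r in results))  # True
```

In a real deployment the retry would also escalate to a human manager after a few failures, matching the talk's advice that managers must monitor execution rather than fire-and-forget.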
Constantly updating the algorithms is another important aspect of working with bots: the system must be continuously monitored and tested to stay confident in its reliability.
We have developed a system for managing this artificial-employee pipeline, currently in test mode. It will help us run the process more effectively and resolve issues as they emerge.