IT expert and CEO of Nanosemantics Stanislav Ashmanov, speaking on air at radio "Komsomolskaya Pravda", explained why the now-popular ChatGPT neural network could be dangerous. According to him, the concern lies in the capabilities of this development, although, of course, it is not yet a matter of talented creators being replaced by a soulless machine.
As the specialist noted, ChatGPT shows signs of an internal dialogue: it behaves as if it can reach a logical conclusion internally. For example, the neural network can be given mathematical problems, and it will produce all the calculations; it also easily handles tasks posed as questions. At the same time, its creators did not specifically train it for this. This gives the expert the impression that ChatGPT has learned to reason logically from statistical data alone. That is why there is so much noise around this neural network now: it has no analogues on the market.
"It scares me that such a thing can be a basic building block for building very intelligent robots. This could well be an internal dialogue engine inside a drone, inside an autonomous robot. That is, it receives visual recognition input from one neural network. And it can be fuzzy, there will be errors, but thanks to this engine of internal inference, dialogical inference, it can decide what to do next," Ashmanov summed up.