AI programs like ChatGPT, created by OpenAI, are being used and discussed widely these days, and we have seen some attempts here to use such programs to help develop understanding of advanced energy technology and the physics behind it. In addition to ChatGPT, Microsoft and Google have their own AI systems, which are being incorporated into their search products.
The whole field has become extremely controversial, with a leading group of artificial intelligence experts co-signing an open letter urging that researchers pause for six months in developing even more powerful AI systems.
An excerpt from the letter:
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
Just this weekend, Italy's privacy regulator blocked ChatGPT over alleged privacy violations. Anyone wishing to use ChatGPT must provide personal information such as a valid phone number, and there has already been an acknowledged data breach of the system.
Where this will all end up is unknown. People point to the benefits that such AI systems' problem-solving capabilities could bring, but also to the possible dangers and potential for abuse. It is obviously going to be a widely discussed topic going forward.