The rapid advancement of artificial intelligence, particularly generative systems built on large language models, has sparked concerns about its reliability and potential dangers. Despite its promise, AI is prone to producing incorrect information, and its confident delivery of false data can have serious consequences. The author argues that AI is being implemented too quickly, driven largely by the potential to cut labor costs, and that the technology remains flawed, requiring significant improvement before it can be used responsibly.
These problems stem from AI's inability to evaluate the credibility of its training data, which leads it to repeat and disseminate incorrect information. AI also lacks the human factor: it cannot physically test products, a step that is crucial for accurate evaluations. The author suggests that AI needs to be "broken" further before it can become truly useful, and calls for legal and cultural guardrails to govern its implementation. Proposed legal solutions include regulating AI training datasets and compelling AI firms to use credible databases; proposed non-legal measures include "poisoning" AI learning pools to deter misuse.
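To make the "poisoning" idea concrete, the sketch below shows one simple form of it: injecting deliberately mislabeled examples into a training pool so that a model trained on the polluted data performs worse. This is a minimal toy illustration, not the article's specific proposal; the dataset, the nearest-centroid classifier, and all function names here are assumptions chosen for brevity.

```python
import random

def make_data(n, seed):
    # Toy 1-D dataset: class 0 clusters near -1.0, class 1 near +1.0.
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        label = rng.randint(0, 1)
        out.append(((1.0 if label else -1.0) + rng.gauss(0, 0.4), label))
    return out

def inject_poison(data, n_poison):
    # Injection poisoning: add fabricated examples that carry a class-1
    # label but sit deep inside class-0 territory, dragging the learned
    # class-1 centroid toward the wrong region.
    return data + [(-3.0, 1)] * n_poison

def centroids(data):
    # Fit a nearest-centroid classifier: mean feature value per class.
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in sums}

def accuracy(cents, data):
    hits = sum(1 for x, y in data
               if min(cents, key=lambda c: abs(x - cents[c])) == y)
    return hits / len(data)

train = make_data(200, seed=0)
test_set = make_data(200, seed=1)
clean_acc = accuracy(centroids(train), test_set)
poisoned_acc = accuracy(centroids(inject_poison(train, 80)), test_set)
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```

Running the sketch shows the poisoned model scoring noticeably below the clean one, which is the core of the argument: a system that ingests data indiscriminately, without evaluating its credibility, can be degraded by anyone who controls part of its learning pool.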