In 2016, Google engineer Illia Polosukhin, frustrated by the lack of progress in AI, collaborated with colleagues on the transformer model, introduced in the seminal 2017 paper “Attention Is All You Need.” The breakthrough revolutionized artificial intelligence, but Polosukhin is now concerned about the secretive nature of large language models (LLMs) and the dangers they pose. He criticizes companies, including Meta, for releasing models that are not truly open source, since the training data and the biases it encodes remain undisclosed. He fears that profit-driven incentives will lead to increasingly manipulative AI models, and he believes current regulatory efforts are inadequate, both because of the technology's complexity and because of the risk of regulatory capture by large corporations.

As an alternative, Polosukhin advocates an open source model of AI with built-in accountability, drawing on his experience with blockchain and Web3 at the Near Foundation. This decentralized approach would allow collective ownership and the development of neutral platforms, aligning incentives away from profit maximization. Near is already working toward this vision, supporting developers and startups through an incubation program aimed at equitable and transparent AI applications, including systems that distribute micropayments to content creators.

The Future of AI – Transparency, Regulation, and the Open Source Solution
Polosukhin advocates for a decentralized, open source AI model to ensure transparency and accountability.