We have grown used to AI improving rapidly, particularly in 2023. The rise of LLMs, however, has intensified the challenges and discussions surrounding data. In this guest essay, Alastair Moore, founding partner at The Building Blocks and long-time EV member, asks us to think more seriously about how we might govern and control AI systems as their performance continues to improve.
The discourse around data and its societal implications isn’t new. Thinkers like Jaron Lanier have long underscored the rights of users over the data they create. Shoshana Zuboff delved deep into the machinations of surveillance capitalism, and Glen Weyl helped introduce new ways of thinking about data ownership. With the current evolution of AI, their insights have become even more pertinent as we navigate the intersection of technology, data, and society.
🙌 Thank Alastair for sharing his exponential view with us by forwarding this email to your network. If you want to discuss the ideas further with Alastair, use the comment button.
Best,
Azeem
Note: Guest posts represent the opinions of the author and not mine.
By Alastair Moore
Sam Bankman-Fried (SBF) is undergoing a highly publicised fraud trial that has seen former allies turn against one another. At the same time, Polkadot, a large multi-chain blockchain platform, is reportedly laying off hundreds of employees as it re-evaluates its focus.
Against this backdrop, I am going to present a controversial view: the era of Web3 is approaching. Web3 technologies, with their decentralised and transparent nature, offer a potential path to addressing some of the key challenges of AI as it scales across our technology systems. In this article, I will focus on two challenges and potential solutions: (1) the relationship with creators whose information and data are used to train AI systems, and (2) the alignment of multi-agent systems.
I will provide examples demonstrating how the integration of AI with Web3 offers new tools for achieving better control, accountability, and trust in AI systems. These tools can play a role in ensuring the responsible use of AI.
The issue of data ownership is complex: how should we attribute ownership and contribution to the data used to train AI systems? This is a longstanding problem, and it is becoming increasingly difficult to solve.
For a future creative landscape in which human and AI-generated media co-exist, I believe it is essential that creators are recognised, attributed, and compensated, receiving apportioned credit (including royalty payments) when their work is used as input to AI systems.
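The split itself is simple arithmetic once attribution weights exist; the hard part is producing those weights and the provenance behind them. As a minimal sketch of what "apportioned credit" could mean in practice (my own illustration; the names and numbers are hypothetical, not drawn from any production system):

```python
from dataclasses import dataclass

@dataclass
class Attribution:
    creator: str   # hypothetical creator identifier
    score: float   # attribution weight for this creator's work

def apportion_royalties(pool: float, attributions: list[Attribution]) -> dict[str, float]:
    """Split a royalty pool pro rata across attribution scores."""
    total = sum(a.score for a in attributions)
    if total == 0:
        return {}  # no attributable contributions, nothing to pay out
    return {a.creator: pool * a.score / total for a in attributions}

# Hypothetical example: a 10-unit royalty pool for one generated image
splits = apportion_royalties(10.0, [
    Attribution("alice", 0.5),   # strongest visual match
    Attribution("bob",   0.3),
    Attribution("carol", 0.2),
])
print(splits)  # roughly {'alice': 5.0, 'bob': 3.0, 'carol': 2.0}, up to float rounding
```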
So, how do we do it?
Recent research from Professor John Collomosse at Surrey University and Adobe introduced EKILA,¹ a decentralised framework that enables creatives to receive recognition and reward for their contributions to GenAI-derived media. EKILA proposes a robust² visual attribution technique and combines it with an emerging content provenance standard (the Coalition for Content Provenance and Authenticity, C2PA) to address a major problem of synthetic media: determining the generative model and training data responsible for AI-generated imagery. EKILA also extends the non-fungible token (NFT) ecosystem by introducing a tokenised representation of rights, enabling a triangular relationship between an asset's Ownership, Rights, and Attribution (ORA).
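To make the ORA triangle concrete, here is a rough sketch of how such linked records might be modelled (my own illustration of the concept; the field names and structures are hypothetical, not EKILA's actual schema):

```python
from dataclasses import dataclass

@dataclass
class OwnershipNFT:
    token_id: str        # on-chain token identifying the asset
    owner_address: str   # wallet that currently holds it

@dataclass
class RightsToken:
    token_id: str        # tokenised licence attached to the asset
    licence_terms: str   # e.g. "generative training permitted, 2% royalty"

@dataclass
class AttributionRecord:
    provenance_manifest: str              # C2PA-style manifest identifier
    contributing_works: dict[str, float]  # training works mapped to attribution weights

@dataclass
class ORAAsset:
    """One media asset linked to its Ownership, Rights, and Attribution (ORA)."""
    asset_hash: str
    ownership: OwnershipNFT
    rights: RightsToken
    attribution: AttributionRecord
```

Given such a record, a marketplace could check who owns a generated image, under what terms it may be used, and which creators' works should share in any royalties, feeding the attribution weights into the kind of split sketched above.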