Michael Heinrich is a Stanford graduate who previously founded and served as CEO of Garten. A Top 100 Entrepreneur of 2022, Michael has had his work published in outlets ranging from Harvard Business Review to Hacking Consciousness. While at Stanford, he was nominated to work with the Industrial Technology Research Institute (ITRI) to transform Taiwanese entrepreneurial education. His previous company, Garten, was accepted into Y Combinator in 2016 and raised multiple rounds, eventually achieving unicorn status. With 0G Labs, Michael is leading the development of the first modular AI chain to support off-chain data verification.
Recently, we asked him a few questions about how blockchain technology and AI can intersect.
The convergence of AI and web3 has seen a wide range of use cases spring up on-chain, from DePIN for monetizing GPU compute to AI agents automating DeFi tasks. In your opinion, what is the strongest use case for blockchain in the context of AI, and what distinguishes this from using analogous centralized services to achieve the same outcome?
Blockchain’s greatest value proposition for AI is making it more accountable. There’s a big difference between AI models built on Web2 and those that are deployed on Web3. With Web2, AI is hosted on centralized infrastructure, and the models are closed-source, meaning they’re opaque. Nobody knows how they work, where the training data came from, or why they generate the outputs they do.
In contrast, Web3 is decentralized and transparent, and one of its strongest use cases is verifiable AI provenance, along with permissionless compute marketplaces. At 0G, we’re building a decentralized AI operating system; one component is a scalable data storage and availability layer that enables huge datasets to be stored on-chain and queried rapidly by anyone. Because this infrastructure is decentralized, data storage is verifiable, models have full traceability, and we can provide on-chain proof-of-inference. These capabilities will be critical for autonomous AI agents in high-stakes environments where transparency is essential, such as financial services, manufacturing, logistics, and governance.
With a blockchain, you don’t have to blindly trust AI. Instead, you can verify it.
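To make the idea of verification a little more concrete, here is a minimal sketch in Python, using purely hypothetical names (0G’s actual proof-of-inference protocol will look different), of how an inference can be committed to and later re-checked by anyone:

```python
import hashlib
import json

def commit_inference(model_hash: str, prompt: str, output: str) -> dict:
    """Build a tamper-evident record binding a model version to one inference."""
    record = {"model": model_hash, "prompt": prompt, "output": output}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return {"record": record, "commitment": digest}

def verify_inference(entry: dict) -> bool:
    """Recompute the hash; any change to the model, prompt, or output breaks it."""
    expected = hashlib.sha256(
        json.dumps(entry["record"], sort_keys=True).encode()
    ).hexdigest()
    return expected == entry["commitment"]

# Illustrative usage: the commitment would be posted on-chain,
# while the full record lives in the off-chain storage layer.
entry = commit_inference("sha256:abc123...", "Approve this loan?", "Yes, score 0.87")
assert verify_inference(entry)
```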
“AI bias” is a broad term that can mean many things in the context of AI model training. What are some of the most common instances of AI bias manifesting, and how can blockchain remedy this?
It’s difficult to detect bias in centralized AI environments because things like data provenance, model training, and outputs are opaque. You don’t know how centralized models arrive at their conclusions, what data informs their responses, or who makes the censorship decisions. According to Anthropic’s research, even models trained to be unbiased will still exhibit their initial bias.
Some of the most common factors behind AI bias include skewed training data, reward hacking, and hallucination loops. When AI is decentralized, you get visibility into all of these things; with centralized systems, it’s a black box. For instance, 0G’s verifiable training records and immutable data trails ensure full transparency and enable community oversight. You can independently verify why a model produced a certain response or understand the actions performed by an AI agent, which helps prevent AI systems from making erroneous decisions with negative consequences in applications such as logistics or job-candidate screening.
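As a rough illustration of what an immutable data trail can look like, using hypothetical names rather than 0G’s actual data structures, a hash-chained log makes any tampering with earlier training records detectable:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append a training event (dataset added, epoch completed, weights updated)
    to a hash-chained log; each entry commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    digest = hashlib.sha256(
        json.dumps({"event": event, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; editing any earlier entry breaks all later ones."""
    prev_hash = "genesis"
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"event": entry["event"], "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        if expected != entry["hash"] or entry["prev"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

# Illustrative usage: an auditor replays the trail end to end.
trail: list[dict] = []
append_entry(trail, {"type": "dataset_added", "id": "example-dataset"})
append_entry(trail, {"type": "epoch_completed", "epoch": 1, "loss": 0.42})
assert verify_chain(trail)
```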
0G also introduces decentralized governance primitives and open audits, which make it easier to align AI outcomes with diverse human values. We can ensure that AI doesn’t favor specific demographics or brands when it comes to assessing loan applications, for example.
What are some of the issues with current solutions for running deep learning models on distributed clients, as per your team’s BadSFL research paper on the topic, and in layman’s terms, what are some solutions to this?
The biggest challenge with current solutions is that they lack robustness against adversarial attacks and lazy participants. As we highlighted in the BadSFL research paper, standard federated learning systems typically degrade very rapidly when exposed to malicious training data or nodes, which raises serious concerns about their reliability, especially as these networks scale.
To protect against these problems, 0G is building a Byzantine-resistant, permissionless training protocol that relies on verifiable computation on consumer devices and on-chain settlement. As demand grows, we can add extra consensus networks via “shared staking”: validators stake on a main chain such as Ethereum and validate on other networks, and the rewards or penalties earned on those networks flow back to the main chain, improving scalability.
We also utilize techniques such as checkpointing and decentralized aggregation to ensure that AI training is reliable, even when it’s distributed across untrusted clients.
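For illustration only, and not 0G’s actual aggregation rule, here is one standard robust-aggregation technique, a coordinate-wise trimmed mean, that lets a federated aggregator tolerate a bounded number of poisoned client updates:

```python
import numpy as np

def trimmed_mean_aggregate(updates: list[np.ndarray], trim: int) -> np.ndarray:
    """Aggregate client updates while discarding the `trim` largest and
    `trim` smallest values per coordinate, so a few poisoned updates
    can't drag the global model arbitrarily far."""
    stacked = np.sort(np.stack(updates), axis=0)   # sort each coordinate across clients
    kept = stacked[trim: len(updates) - trim]      # drop the extremes
    return kept.mean(axis=0)

# Illustrative usage: 5 honest clients plus 2 attackers sending huge gradients.
honest = [np.random.normal(0.0, 0.1, size=4) for _ in range(5)]
poisoned = [np.full(4, 100.0), np.full(4, -100.0)]
global_update = trimmed_mean_aggregate(honest + poisoned, trim=2)
print(global_update)  # stays close to the honest clients' mean
```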
What value does on-chain governance bring to AI, and what examples can you give of where this would prove useful?
One of the main risks of centralized AI is that those systems may one day evolve to the point where they decide they don’t need humans and are better off making decisions about their purpose and existence for themselves. Many people fear that AI may one day achieve self-awareness and become self-governing, which could eliminate whatever transparency remains around its decision-making processes.
Decentralized AI can help to prevent this with on-chain governance, making AI systems more programmable, upgradeable, and publicly auditable. I believe this kind of oversight is essential to keep a check on AI systems that evolve over time.
With 0G’s architecture, users can vote on almost every aspect of an AI system. They can vote on what new model upgrades to implement, what kinds of bias thresholds / values they’re willing to tolerate, and even the dataset inclusion criteria, so we can give more relevance to certain kinds of data and less to others.
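As a rough sketch of what such a vote could look like, with hypothetical names and simplified stake-weighted rules rather than 0G’s actual governance contracts:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """A governance proposal, e.g. 'adopt model v2' or 'tighten the bias threshold'."""
    description: str
    votes_for: float = 0.0
    votes_against: float = 0.0
    voted: set = field(default_factory=set)

    def vote(self, voter: str, stake: float, approve: bool) -> None:
        if voter in self.voted:
            raise ValueError("each address votes once")
        self.voted.add(voter)
        if approve:
            self.votes_for += stake
        else:
            self.votes_against += stake

    def passed(self, quorum: float) -> bool:
        total = self.votes_for + self.votes_against
        return total >= quorum and self.votes_for > self.votes_against

# Illustrative usage: token holders vote on a bias-threshold change.
p = Proposal("Lower demographic-bias tolerance from 0.05 to 0.02")
p.vote("0xA1", stake=400, approve=True)
p.vote("0xB2", stake=150, approve=False)
p.vote("0xC3", stake=300, approve=True)
print(p.passed(quorum=500))  # True: 700 for vs 150 against, quorum met
```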
On-chain governance is about creating AI systems that are shaped, influenced, and evolved according to the wishes of their users, rather than being under the thumb of the boards of big corporations.
Can web3 technology, including blockchain and smart contracts, really enforce ethical standards in AI development to ensure that creators are fairly remunerated for their data being used in model training, and if so, how?
Yes, because fair remuneration becomes a fundamental part of how decentralized AI models operate, with smart contracts, data provenance logs, and tokenized royalty systems enforcing the economic model. If someone supplies a dataset for AI agents to use and asks for a 5% revenue share of any income generated by an agent that relies on that data, the system will make sure that happens. It can’t work any other way. Payments are made in tokens, and the smart contracts specify the conditions for transactions and automatically execute them when those conditions are met.
With 0G’s stack, users can trace when their data is used (attribution) and how it influenced a model’s decisions or outputs, and each time that happens, they receive royalties via a token payout. It’s a foolproof solution for establishing and upholding creators’ rights in the AI era, addressing a very sensitive issue that centralized AI providers try to hide behind their closed APIs.
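As a simplified sketch of the economics described above, using hypothetical names rather than 0G’s actual contracts, a settlement step might split an agent’s revenue according to pre-agreed shares like this:

```python
def settle_royalties(revenue: float, contributors: dict[str, float]) -> dict[str, float]:
    """Split an agent's revenue according to pre-agreed revenue shares.
    `contributors` maps a data provider's address to its share (e.g. 0.05 = 5%).
    On-chain, a smart contract would execute this automatically at settlement."""
    if sum(contributors.values()) > 1.0:
        raise ValueError("shares cannot exceed 100% of revenue")
    return {addr: revenue * share for addr, share in contributors.items()}

# Illustrative usage: an agent earned 1,000 tokens this epoch; one dataset
# provider negotiated a 5% share, another 2%.
payouts = settle_royalties(1000.0, {"0xDataProviderA": 0.05, "0xDataProviderB": 0.02})
print(payouts)  # {'0xDataProviderA': 50.0, '0xDataProviderB': 20.0}
```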