Carbon credits hold significant potential to support climate mitigation efforts by providing financial incentives for emissions reductions and carbon sequestration. However, the current carbon credit ecosystem faces critical challenges that undermine its integrity, transparency and trustworthiness. A primary issue is double counting, where the same carbon credit is claimed by multiple parties, leading to unreliable emissions reporting and inflated climate mitigation claims. This issue is exacerbated by the absence of robust frameworks for carbon credit provenance, including unclear ownership histories and ambiguous project documentation. Consequently, it is difficult to verify whether a credit originates from a genuine emissions reduction activity, has not been claimed before, or accurately represents its stated environmental benefit. These problems are aggravated by manual and inconsistent reporting processes, which introduce inefficiencies and human-induced errors into the evaluation of carbon offset projects. Without intelligent, reliable and standardised mechanisms for automatically assessing project impact, the credibility of carbon markets is severely compromised.
Emerging technologies offer promising solutions to these longstanding issues. Artificial intelligence and advanced sensing technologies now enable high-frequency, real-time tracking of carbon-related activities. These tools could be used to intelligently detect land-use changes, estimate carbon sequestration, and monitor emissions with a level of precision and scalability that manual inspections cannot achieve. When combined with blockchain infrastructure, this data can be securely recorded and time-stamped, providing an immutable audit trail for each carbon credit.
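The idea of an immutable, time-stamped audit trail can be illustrated with a minimal sketch: each monitoring record embeds the hash of the previous record, so any later tampering is detectable. This is an illustrative stand-in only; a production system would record such entries on an actual blockchain or managed ledger service, and the field names used here are hypothetical.

```python
import hashlib
import json
import time

def record_event(chain, payload):
    """Append a monitoring event to a hash-chained audit trail.

    Each entry embeds the hash of the previous entry, so altering any
    earlier record invalidates every later hash. (Sketch only; a real
    deployment would anchor these hashes on a blockchain.)
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "payload": payload,        # e.g. a sensor reading or land-use estimate
        "timestamp": time.time(),  # time-stamped at the moment of recording
        "prev_hash": prev_hash,
    }
    # Deterministic serialisation so the hash is reproducible on verification.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Re-derive every hash; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True
```

Because each entry's hash covers the previous entry's hash, verifying the final link transitively attests to the whole history of the credit.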
This tutorial systematically explores how blockchain technologies and AI can be leveraged to engineer more trustworthy systems that support the transition to Net-Zero emissions. We focus on two high-impact sectors, namely agriculture and transportation, to illustrate current applications, implementation challenges and emerging best practices.
Finally, the tutorial concludes by outlining future research directions, including the deployment of autonomous agents for intelligent and continuous verification, the integration of smart contracts for automating carbon credit issuance, and the concept of carbon tokens and fractionalisation. Collectively, these innovations offer a path towards restoring the credibility and impact of carbon markets, while opening new avenues for research, government policy and translation into practice.
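To make the fractionalisation idea concrete, the sketch below models a single carbon token split into fractions that can be transferred and then retired (claimed) exactly once. The class and its ledger fields are hypothetical illustrations, not any registry's actual data model; a deployed version would live in a smart contract rather than in-memory Python.

```python
class CarbonToken:
    """Minimal sketch of a fractionalised carbon credit (hypothetical model).

    One token represents a verified quantity of CO2e; ownership is split
    into fractions, and a fraction leaves circulation permanently when
    retired, which rules out the double counting discussed above.
    """

    def __init__(self, token_id, total_fractions, issuer):
        self.token_id = token_id
        self.balances = {issuer: total_fractions}  # owner -> unretired fractions
        self.retired = {}                          # owner -> fractions claimed

    def transfer(self, sender, receiver, amount):
        """Move unretired fractions between owners."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient unretired fractions")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def retire(self, owner, amount):
        """Claim the offset: retired fractions can never be spent again."""
        if self.balances.get(owner, 0) < amount:
            raise ValueError("cannot retire fractions you do not hold")
        self.balances[owner] -= amount
        self.retired[owner] = self.retired.get(owner, 0) + amount
```

A smart-contract implementation would enforce the same invariant (a fraction is either circulating or retired, never both) with the ledger state held on-chain.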
Dr Asma Alkhalaf: A leading researcher specialising in engineering research-inspired solutions that leverage emerging technologies to advance environmental sustainability. She holds a PhD in Distributed and Cloud Computing, and her work bridges environmental sustainability with emerging digital technologies such as blockchain and AI.
Asma Mistadi: A researcher with a strong commitment to the use of technology for environmental sustainability, with a focused interest in the tokenisation and fractionalisation of carbon credits. Her PhD research combines Artificial Intelligence and Blockchain to develop intelligent solutions for carbon credit ownership provenance.
Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) are rapidly transforming how businesses interact with and utilise information. While LLMs, such as GPT and Claude, have achieved unprecedented performance in natural language understanding and generation, they often exhibit limitations, including hallucination, outdated knowledge, and a lack of interpretability. RAG addresses these challenges by combining LLMs with external information retrieval mechanisms, enabling systems to generate responses grounded in relevant, real-time data. This hybrid architecture follows a retrieve-then-generate paradigm, retrieving contextually relevant documents from a knowledge base and then using them to guide the generation of more accurate and trustworthy responses.
This tutorial offers a hands-on introduction to integrating LLMs and RAG into practical business workflows. Participants will explore the foundational concepts of tokenisation, embedding-based retrieval, vector databases, and prompt engineering. Emphasis is placed on applying RAG to knowledge-intensive applications such as customer service chatbots, intelligent document summarisation, and dynamic content generation. By incorporating up-to-date and domain-specific information into the generation process, RAG allows LLMs to deliver more reliable outputs for enterprise use cases.
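Embedding-based retrieval, the core mechanism behind vector databases, reduces to ranking stored vectors by similarity to the query embedding. The sketch below uses cosine similarity over toy 3-dimensional vectors; in practice the vectors come from an embedding model and the index is a vector database, and the document labels here are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest(query_vec, index, k=2):
    """Rank stored (doc_id, vector) pairs by similarity to the query embedding.

    Toy stand-in for a vector database's nearest-neighbour search.
    """
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

Production systems replace the linear scan in `nearest` with approximate nearest-neighbour indexes so retrieval stays fast over millions of documents.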
Live demonstrations will guide attendees through building an end-to-end intelligent chatbot using LLMs, LangChain, and Streamlit, showcasing how RAG can be deployed with minimal infrastructure using cloud-based environments such as Google Colab. Attendees will receive codebases and implementation templates to replicate and customise in their workflows.
The session concludes with a discussion of implementation challenges, including latency, retrieval accuracy, and the ethical risks associated with automated decision-making. We also explore future research directions such as adaptive retrieval agents, integration with multimodal inputs, and responsible fine-tuning techniques to ensure fairness, transparency, and accountability.
Collectively, this tutorial equips participants with the practical tools and theoretical insights to harness LLMs and RAG for intelligent automation, enhancing the quality and efficiency of digital decision-making in modern enterprises.
Charles Liu: A researcher at the Australian Artificial Intelligence Institute (AAII), University of Technology Sydney, focusing on intelligent computing and systems engineering with applications in smart agriculture, carbon intelligence, and renewable energy. He proposed the KACINO system for digitising carbon-neutral assets.
Imani Abayakoon: A PhD student at the University of Technology Sydney specialising in Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and Explainable Artificial Intelligence (XAI), with research dedicated to advancing AI systems' accuracy, reliability, and interpretability.