The 30-Minute Guide to Your First AI Application: Dify.ai + Ollama/ChatGPT on Mac

How to create a functional documentation assistant without writing code. Introduction: Large Language Models (LLMs) have transformed how we build intelligent applications, but implementing production-ready AI systems often requires navigating complex infrastructure, managing model deployments, and building custom interfaces. Many developers face a challenging choice: use simple but limiting no-code platforms that hide the complexity, […]

Dynamic Knowledge Graphs: A Next Step For Data Representation?

Integrating temporal data into static knowledge graphs. Introduction: Knowledge graphs (KGs) have proven to be an increasingly popular and effective method of data representation. In KGs, entities and concepts are represented as nodes, while the relationships between them are depicted as edges. Thus, KGs can effectively capture the semantic meaning of nodes. For instance, Google’s […]

Tree of Thoughts: A New Way to Unlock Problem-Solving in Large Language Models

System 1 vs. System 2: Bringing Deliberate Thinking to AI. Author’s Note: This article summarizes research from “Tree of Thoughts: Deliberate Problem Solving with Large Language Models” by Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan from Princeton University and Google DeepMind. The original paper was presented […]

Generated Knowledge Prompting: Enhancing LLM Responses with Self-Generated Context

Created using Gemini 2.5 Pro on 17 April 2025 for the prompt “Generated Knowledge Prompting: Enhancing LLM Responses with Self-Generated Context”. Introduction: Generated Knowledge Prompting is a prompt engineering technique designed to enhance the performance of large language models (LLMs) by leveraging their ability to generate relevant knowledge dynamically. By first generating useful knowledge related […]
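The excerpt describes a two-stage flow: the model first generates knowledge statements, which are then prepended to the question before answering. A minimal sketch of the second stage, assuming a hypothetical prompt layout (the `Knowledge:`/`Question:`/`Answer:` labels and the hard-coded knowledge statement are illustrative, not from the paper):

```python
def build_knowledge_prompt(question: str, knowledge: list[str]) -> str:
    """Stage 2 of Generated Knowledge Prompting: prepend the model's own
    generated knowledge statements to the question before answering."""
    facts = "\n".join(f"- {fact}" for fact in knowledge)
    return f"Knowledge:\n{facts}\n\nQuestion: {question}\nAnswer:"

# Stage 1 would normally be an LLM call, e.g.:
#   knowledge = llm("Generate relevant facts about: " + question)
# Here the generated statements are hard-coded for illustration.
knowledge = ["Golf scoring rewards the lowest total number of strokes."]
prompt = build_knowledge_prompt(
    "In golf, is a higher point total better?", knowledge
)
```

The final prompt is then sent to the same (or another) model, which answers with the generated knowledge in view rather than from the bare question alone.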

The Evolution of Research: How Persistent Identifiers Are Transforming the Scholarly Landscape

Museum Front-Face Visualization. This image is sourced from ChatGPT’s DALL·E image generation. Introduction: Imagine a world where every piece of research—every article, dataset, researcher, and institution—is seamlessly connected, no matter where it resides or how the digital landscape shifts. This isn’t a distant dream; it’s the reality being forged by persistent identifiers (PIDs). These unassuming […]

Understanding the PID Graph: Building Connected Research Infrastructure for the Digital Age

How Persistent Identifiers Are Creating a Navigable Map of Knowledge. This blog post is based on and summarises the paper “Connected Research: The Potential of the PID Graph” by Helena Cousijn, Ricarda Braukmann, Martin Fenner, Christine Ferguson, René van Horik, Rachael Lammey, Alice Meadows, and Simon Lambert, published in 2021. The original paper introduced the […]

Active Prompting with Chain-of-Thought: Revolutionizing Reasoning in Large Language Models

Superwise.ai. (n.d.). Active prompting diagram [Illustration]. In Making sense of prompt engineering. Superwise.ai Blog. https://superwise.ai/blog/making-sense-of-prompt-engineering/ Introduction: Large language models (LLMs) like ChatGPT have transformed how we use artificial intelligence, excelling at tasks like writing essays, answering questions, and even holding conversations. But when it comes to complex reasoning—think solving math problems, tackling commonsense puzzles, or […]

Meta Prompting for AI Systems

A New Way to Teach AI How to Think, Not Just Respond. This article is based on the paper Meta Prompting for AI Systems, published by researchers at Tsinghua University and the Shanghai Qi Zhi Institute. Introduction: The rapid advancement of Large Language Models (LLMs) has revolutionised artificial intelligence, enabling powerful text understanding and generation capabilities. However, despite […]

Program of Thoughts 

The Evolution of Numerical Reasoning in AI: The Power of Program of Thoughts (PoT). This article is based on the paper Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks by Wenhu Chen and others. Introduction: Artificial Intelligence has made significant strides in solving complex numerical reasoning tasks. Traditional approaches focused […]
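The paper's title captures the core idea: the model expresses its reasoning as an executable program, and an interpreter, not the model, carries out the arithmetic. A minimal sketch of the execution side, assuming the convention that the generated program binds its result to a variable named `ans` (the example program and variable name are illustrative, not taken from the paper):

```python
def execute_pot_program(program: str) -> float:
    """Run an LLM-generated reasoning program and read its final `ans`
    variable -- the interpreter, not the model, does the computation."""
    scope: dict = {}
    exec(program, scope)  # a real system should sandbox untrusted code
    return scope["ans"]

# A program the model might emit for: "What is 3 years of 5% simple
# interest on $1000, plus the principal?" (illustrative output only)
generated_program = (
    "principal = 1000\n"
    "interest = principal * 0.05 * 3\n"
    "ans = principal + interest\n"
)
execute_pot_program(generated_program)  # 1150.0
```

Delegating the arithmetic this way sidesteps a common failure mode of chain-of-thought prompting, where a sound reasoning chain is undone by a single miscalculated intermediate step.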

Understanding In-Context Learning

A Comprehensive Survey. This article explores and analyses the paper “A Survey on In-Context Learning” by Dong et al. (2024). In-Context Learning Visualization: this image is sourced from Eeswar Chamarthi on LinkedIn. Introduction: In the rapidly evolving field of natural language processing (NLP), In-Context Learning (ICL) has emerged as a transformative capability of large […]