Tree of Thoughts: A New Way to Unlock Problem-Solving in Large Language Models

System 1 vs. System 2: Bringing Deliberate Thinking to AI

Author’s Note: This article summarizes research from “Tree of Thoughts: Deliberate Problem Solving with Large Language Models” by Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan of Princeton University and Google DeepMind. The original paper was presented […]
Active Prompting with Chain-of-Thought: Revolutionizing Reasoning in Large Language Models

Active prompting diagram: Superwise.ai. (n.d.). [Illustration]. In Making sense of prompt engineering. Superwise.ai Blog. https://superwise.ai/blog/making-sense-of-prompt-engineering/

Introduction

Large language models (LLMs) like ChatGPT have transformed how we use artificial intelligence, excelling at tasks like writing essays, answering questions, and even holding conversations. But when it comes to complex reasoning, such as solving math problems, tackling commonsense puzzles, or […]
Understanding In-Context Learning

A Comprehensive Survey

This article explores and analyses the paper “A Survey on In-Context Learning” by Dong et al. (2024).

In-Context Learning visualization: image sourced from Eeswar Chamarthi on [LinkedIn].

Introduction

In the rapidly evolving field of natural language processing (NLP), In-Context Learning (ICL) has emerged as a transformative capability of large […]
From Hallucination to Verification: Making AI Responses More Trustworthy

How Chain-of-Verification Creates More Reliable AI Systems

This article explores and analyses the paper “Chain-of-Verification Reduces Hallucination in Large Language Models” by Shehzaad Dhuliawala and colleagues from Meta AI and ETH Zürich. The paper was published on arXiv in September 2023.

Introduction

As Large Language Models (LLMs) become increasingly integrated into our digital infrastructure, […]