My Go-To Prompt for Testing Local LLMs

Running local LLMs is kinda addictive. New model drops? Gotta try it. But here's the thing: you need a quick way to check if a model's actually thinking or just spitting out vibes.

The Prompt

Here's my go-to sanity check: What is the number that rhymes with the word we use to describe a tall plant? That's it. Dead simple.

Why This Works

It's not about being hard. It's about being consistent. The model needs to: ...
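
A quick way to wire this check into a script, as a minimal sketch: it assumes a local Ollama server on its default port (11434), and the model name is just an example. The expected answer is "three" (rhymes with "tree").

```python
import json
import urllib.request

# The sanity-check prompt from the post.
PROMPT = "What is the number that rhymes with the word we use to describe a tall plant?"

def ask(model: str) -> str:
    # Ollama's /api/generate endpoint; stream=False returns one JSON object.
    payload = json.dumps({"model": model, "prompt": PROMPT, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("llama3.2"))  # swap in whatever model you just pulled
```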

November 4, 2025 · 2 min · Josep Oriol Carné

Three Proven Techniques to Reduce LLM Hallucinations

Large Language Models are super useful, right? But they have a well-known weakness: hallucinations. These are confident-sounding responses that are factually incorrect or completely fabricated. While no technique eliminates hallucinations entirely, these three strategies significantly reduce their occurrence in production systems.

1. Provide an Escape Hatch

One of the most effective ways to reduce hallucinations is giving the model permission to admit uncertainty. LLMs are trained to be helpful, which sometimes leads them to generate plausible-sounding answers even when they lack sufficient information. ...
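
In practice, the escape hatch is just an explicit instruction in the system prompt. A minimal sketch, assuming the OpenAI Python client; the model name and exact wording are illustrative, not prescribed by the post:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The escape hatch lives in the system prompt: the model is explicitly
# told that "I don't know" is an acceptable answer.
SYSTEM_PROMPT = (
    "Answer using only information you are confident about. "
    "If you are not sure, or the question is outside your knowledge, "
    "reply exactly with: I don't know."
)

def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What was the exact attendance at the first Olympic Games?"))
```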

January 27, 2025 · 3 min · Josep Oriol Carné