The 10 Rules for Reliable Use of LLMs at Work

Matthew Mesher
March 20, 2025 6 Min Read



Generative AI tools like LLMs (large language models) have become indispensable in modern workplaces, helping us write reports, create code, brainstorm ideas, and more. To understand how to use these tools effectively, we surveyed LLM best practices across the Savant community, looking for what works best to avoid common pitfalls and maximize results. We compiled what we learned into a guide to steering clear of LLM hallucinations and making generative AI operate safely and reliably for you at work:
1. Provide context
The #1 mistake people make when using LLMs is writing prompts without context. We often don’t realize just how much tribal knowledge we rely on in our day-to-day work. Take a moment to write it down and share it with the LLM — it makes a huge difference.
Example prompt: “I work at a B2B SaaS company targeting enterprise customers. I need help drafting a blog post about our new product feature — focus on benefits for IT teams, not generic marketing speak. Here’s an example of our tone and style.”
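If you send prompts from code, the same context-first structure can be assembled programmatically. As a minimal sketch — `build_prompt` and its field layout are our own illustration, not any library's API:

```python
def build_prompt(context: str, task: str, style_example: str = "") -> str:
    """Assemble a prompt that leads with background the model can't guess."""
    parts = [f"Context: {context}", f"Task: {task}"]
    if style_example:
        # Showing the desired tone beats describing it in the abstract.
        parts.append(f"Match the tone and style of this example:\n{style_example}")
    return "\n\n".join(parts)

prompt = build_prompt(
    context="B2B SaaS company selling to enterprise IT teams",
    task="Draft a blog post about our new feature; focus on benefits "
         "for IT teams, not generic marketing speak",
    style_example="We write plainly and lead with the reader's problem.",
)
```

The point is the ordering: background first, then the task, then a concrete style sample.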
2. Assign the LLM a role
LLMs thrive when you give them a specific role to act in. This serves as additional context to guide their outputs.
Example prompt: “You are an IT consultant helping a company choose cloud storage solutions. Create a comparison table highlighting the pros and cons of three popular options.”
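In code, a role is typically set once using the common role/content chat-message convention (a "system" message followed by the user request). A sketch — the helper name is ours, and the actual API call is omitted:

```python
def with_role(role_description: str, user_request: str) -> list[dict]:
    """Prepend a system message that fixes the model's persona."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_request},
    ]

messages = with_role(
    "You are an IT consultant helping a company choose cloud storage solutions.",
    "Create a comparison table highlighting the pros and cons of three popular options.",
)
# `messages` is the list you would pass to a chat-completion client.
```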
3. Give detailed, precise instructions
Take control of the entire process by providing detailed and precise instructions. Treat the LLM like a junior employee who needs exact guidance to perform well.
4. Break big tasks into steps
When working with LLMs, taking things step by step is better than asking them to tackle a massive task in one go.
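Step-by-step decomposition can be sketched as a chain of prompts, where each step's output feeds the next. `call_llm` below is a placeholder that just echoes its input so the example runs offline; swap in your real client:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call your model here.
    return f"<response to: {prompt[:40]}>"

steps = [
    "List the key points our report must cover.",
    "Expand each point into a short paragraph: {prior}",
    "Edit the draft for tone and length: {prior}",
]

output = ""
for step in steps:
    # Each prompt carries the previous step's result forward.
    output = call_llm(step.format(prior=output))
```

Three small, checkable steps are far easier to QA than one "write the whole report" prompt.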
5. Iterate on the output
Getting the best out of an LLM is often an iterative process. After the first draft or output, refine your instructions based on what’s missing or incorrect.
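Iteration looks like a short feedback loop: correct the previous draft rather than starting over. Again, `call_llm` is a stand-in stub so the sketch runs without a model:

```python
def call_llm(prompt: str, feedback: str = "") -> str:
    # Stub: a real call would send the draft plus feedback to the model.
    note = f" (revised per: {feedback})" if feedback else ""
    return f"draft of '{prompt[:30]}'{note}"

draft = call_llm("Write a one-page summary of our Q3 results.")
for feedback in ["Too long; cut it to 200 words.", "Add the revenue figure."]:
    # Refine the previous draft instead of starting over.
    draft = call_llm(draft, feedback=feedback)
```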
6. Review the chain of thought
Reasoning capabilities in LLMs are improving rapidly, but they’re still new and prone to unexpected hallucinations. If you’re using an LLM for complex reasoning tasks, always review its chain of thought.
7. Watch for generic answers on niche topics
The more mainstream a topic, the more generic the information an LLM will provide. Conversely, niche topics often lead to specific and high-quality information. But be cautious — if you ask about a niche topic and receive very general content, that’s a red flag for hallucination.
8. Provide positive and negative examples
LLMs excel when you provide positive and negative examples of the desired outcome. This helps align their output with your expectations.
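One way to sketch this is a few-shot prompt that shows the model both what to imitate and what to avoid. The layout and helper name below are our own convention, not a standard:

```python
def few_shot_prompt(task: str, good: list[str], bad: list[str]) -> str:
    """Show the model what to imitate and what to avoid."""
    lines = [task, "", "Good examples:"]
    lines += [f"- {g}" for g in good]
    lines += ["", "Do NOT write examples like:"]
    lines += [f"- {b}" for b in bad]
    return "\n".join(lines)

fewshot = few_shot_prompt(
    "Write a subject line for our feature-launch email.",
    good=["Cut your month-end close time by 80%"],
    bad=["Exciting news from our team!!!", "Newsletter #47"],
)
```

Negative examples are often the bigger lever: they rule out the generic patterns the model would otherwise default to.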
9. Ask the LLM to help with the prompt
When in doubt, ask the LLM for help in crafting the best prompt for your needs.
10. Know when to stop
Sometimes, even after multiple attempts, an LLM may fail to produce the results you need. When this happens, it’s important to stop forcing it — some tasks might simply be beyond the technology’s current capabilities.
Generative AI is a powerful tool, but like any tool, it’s only as good as the person using it. By providing context, being specific, managing tasks step by step, and having a solid QA plan, you can avoid the pitfalls of hallucinations and get reliable, high-quality results. Treat your LLM like an intern: train it, review its work, and guide it closely.
This field is evolving fast, and so are the best practices for using LLMs effectively. What works today might be improved tomorrow. Sharing insights can help all of us stay ahead and get the most out of this transformative technology.





