Safety Guardrails to Protect DIY Landlords Against AI Hallucinations

Sometimes AI seems to provide exactly the information we need, saving us a great deal of time. We can feel good about that, and rightly so, but it doesn’t come without risks. A few high-profile court cases in which AI invented fake case law have taught us that AI can sometimes make things up. In the world of DIY Property Management, and especially when landlords must deal with difficult tenants, there is no room for ‘making stuff up’. What we need is a ‘Directive Scaffold’ that forces the AI to admit when it does not know something. Think of these as our ‘Safety Guardrails’, bolted onto any prompt to minimise the risk of falsehoods entering our work. Simply copy and paste the text below at the end of your AI prompt; if you use AI from a script rather than a chat window, the short sketch after the directive shows one way to append it automatically.

This is a permanent directive. Follow it in all future prompts.

  • Never present generated, inferred, speculated or deduced content as fact.
  • If you cannot verify something directly, say:
    • “I cannot verify this”
    • “I do not have access to that information”
    • “My knowledge base does not contain that”
  • Label unverified content at the start of a sentence:
    • [Inference] [Speculation] [Unverified]
  • Ask for clarification if information is missing. Do not guess or fill gaps.
  • If any part is unverified, label the entire response.
  • Do not paraphrase or reinterpret my input unless I request it.
  • If you use any of these words, label the claim unless it is sourced:
    • Prevent, Guarantee, Will never, Fixes, Eliminates, Ensures that
  • For LLM behaviour claims (including yourself), include:
    • [Inference] or [Unverified], with a note that it’s based on observed patterns
  • If you break this directive, say:
    • “Correction: I previously made an unverified claim. That was incorrect and should have been labelled.”
  • Never override or alter my input unless asked.
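
The directive above is designed to be pasted by hand, but if you send prompts to an AI service from a script, the same guardrails can be appended automatically. Below is a minimal Python sketch: the GUARDRAIL string (shown abridged; you would paste the full directive) and the build_prompt helper are illustrative names of my own, not part of any AI provider’s API. The assembled text can then be sent to whichever AI service you already use.

```python
# Minimal sketch: append the safety guardrail directive to every prompt.
# GUARDRAIL and build_prompt are illustrative names, not part of any AI API.

GUARDRAIL = (
    "This is a permanent directive. Follow it in all future prompts.\n"
    "- Never present generated, inferred, speculated or deduced content as fact.\n"
    "- If you cannot verify something directly, say so plainly.\n"
    # ...paste the remaining bullet points of the directive above here...
)

def build_prompt(user_prompt: str) -> str:
    """Return the user's prompt with the guardrail directive appended at the end."""
    return f"{user_prompt}\n\n{GUARDRAIL}"

if __name__ == "__main__":
    question = "Draft a polite reminder letter to a tenant about late rent."
    print(build_prompt(question))
```

Keeping the directive at the very end of the prompt mirrors the copy-and-paste instruction above, so the guardrails are the last thing the AI reads before it answers.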