Hard-Won Wisdom

Tips & Lessons — What 4 Years of Daily AI Use Taught Us

Skip the hype. Here's what actually works — everything we wish someone had told us before we started, and everything we learned the hard way after.

Workshop Tips

01
AI Is Not a Search Engine

Stop asking it questions. Start giving it jobs. The shift from “What is X?” to “Build me X” is where the magic happens.

02
Be Specific or Be Disappointed

Vague prompts get vague results. Tell AI exactly what you want: the format, the tone, the length, the audience. Constraints make output better.

03
Show, Don’t Just Tell

Give AI an example of what good looks like. One example is worth a hundred words of instruction.

04
Think in Systems, Not Prompts

A single prompt is a question. A system prompt + tools + goals is an agent. Build systems, not one-off interactions.

05
Start Ugly, Ship Fast

Your first version will be rough. That’s the point. Get something working, then iterate. Perfection is the enemy of learning.

06
Context Is Everything

The more relevant context you give AI, the better it performs. Background, constraints, examples, audience — feed it all.

07
Let AI Check Its Own Work

Don’t just accept the first output. Ask AI to review, critique, and improve what it just made. Self-correction is a superpower.
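The review loop in this tip can be sketched in a few lines. Here `ask_model` is a stand-in for whatever chat-completion client you use (Anthropic, OpenAI, or anything else) — the callable signature is an assumption for illustration, not a specific API:

```python
def self_review(ask_model, task: str, rounds: int = 2) -> str:
    """Draft once, then ask the model to critique and improve its own output.

    `ask_model` is any callable that takes a prompt string and returns
    the model's reply -- plug in your real LLM client here.
    """
    draft = ask_model(f"Task: {task}\nWrite a first draft.")
    for _ in range(rounds):
        draft = ask_model(
            "Review the draft below. List its weaknesses, "
            "then return an improved version.\n\n"
            f"Task: {task}\n\nDraft:\n{draft}"
        )
    return draft
```

Two review rounds is usually enough; more rounds tend to produce rewording rather than improvement.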

08
Automate the Boring Stuff

If you do the same thing more than twice, make an agent do it. Scheduling, formatting, summarizing — these are agent jobs now.

09
Read the Output, Not Just the Answer

AI explains its reasoning. Read it. You’ll learn more from how it thinks than from what it produces.

10
One Agent, One Job

Don’t ask one agent to do 10 things. Give each agent a clear, focused role. Then orchestrate them like a team.

11
Break Big Problems Into Steps

AI handles small, clear tasks better than big, vague ones. Chain reasoning: step 1 feeds step 2 feeds step 3.
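The chaining idea above can be sketched as a simple pipeline, where each step's output becomes the next step's input. As before, `ask_model` is a placeholder for your actual LLM client, and the example step list is hypothetical:

```python
def run_chain(ask_model, steps, initial_input: str) -> str:
    """Run an ordered list of instructions, feeding each step's output forward.

    `steps` is a list of instruction strings; `ask_model` is any callable
    mapping a prompt string to the model's reply.
    """
    result = initial_input
    for instruction in steps:
        result = ask_model(f"{instruction}\n\nInput:\n{result}")
    return result

# Hypothetical chain: extract facts -> summarize -> format as bullets.
example_steps = [
    "Extract the key facts from the input.",
    "Summarize the facts in two sentences.",
    "Rewrite the summary as three bullet points.",
]
```

Each step stays small and checkable — if the final output is wrong, you can inspect the intermediate results to see which link in the chain failed.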

12
You’re the Director, Not the Coder

Your job is to describe what you want and review what you get. AI writes the code. You make the decisions.

What 4 Years of AI Taught Me

01
AI Makes Things Up

When it doesn’t have real data, it invents convincing answers. Always verify before you trust.

02
AI Will “Improve” What You Didn’t Ask For

Give it a reference and it adds features, changes layouts, “fixes” things. Be explicit: “match exactly, do not improve.”

03
Your First Review Will Miss Things

First-pass comparisons always miss differences. Budget at least 2 rounds of review for anything you ship.

04
Raw Data Is Not Signal

Feeding AI more information doesn’t help unless you explain what it means and how to use it. Context engineering > prompt engineering.

05
Small Mistakes Cause Big Failures

A typo in a config, a missing import, one wrong character — tiny errors cascade into system-wide breakdowns. Sweat the details.

06
Don’t Restart, Investigate

When something breaks, the instinct is to restart and try again. Resist it. Find the root cause first or you’ll trigger the same failure on loop.

07
“Done” Means Proven

If you can’t show it working, it’s not done. A working demonstration is the only definition of complete.

08
Bugs Travel in Packs

When you find one problem, look for more. Multiple issues hide behind each other — fixing just the first one wastes time.

09
Context Is King

Shorter, focused context windows let you direct AI with more precision. More isn’t better — relevant is better.

10
Keep a Lessons Log

Create a process to capture what went wrong and what you learned. If you don’t write it down, you’ll repeat it.

11
Give Agents Personalities

A defined identity helps AI understand how to execute within constraints. Role, tone, boundaries — these shape better output.

12
Agents That Don’t Learn Are Cron Jobs

Build learning loops into your agents. If they’re just repeating the same thing, they’re scheduled scripts with extra steps.

Common Mistakes (and What to Do Instead)

We see these patterns in every workshop. Avoid them and you’ll be ahead of 90% of AI users.

Don’t
Use AI as a search engine. “What is machine learning?”

Instead
Give it a job. “Explain machine learning to my sales team in 3 bullet points they can use in client calls.”

Don’t
Copy-paste AI output straight into production without reading it.

Instead
Review every output. AI is a first draft machine, not a final draft machine. You’re the editor.

Don’t
Start by building a complex multi-agent system on day one.

Instead
Start with a single prompt that solves one real problem. Scale to agents after you’ve mastered the basics.

Don’t
Dump your entire knowledge base into the prompt and hope for the best.

Instead
Feed relevant context only. A focused 500-word brief outperforms a 50-page doc dump every time.

Don’t
Send a prompt with no role, no format, and no constraints.

Instead
Always define: who the AI is, what format you want, and what it should NOT do. Constraints improve output.

Don’t
Use the same AI model for every task regardless of complexity.

Instead
Match the model to the task. Fast models for simple work, powerful models for complex reasoning. Save money and time.

Don’t
Expect a perfect result on the first try and give up when it’s not.

Instead
Plan for 2–3 iterations. First pass = rough draft. Second pass = refined. Third pass = polished.

Don’t
Automate a workflow with AI and walk away with no monitoring.

Instead
Set guardrails, add logging, and review outputs regularly. Trust but verify — especially with autonomous agents.
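The model-matching advice above can start as simply as a keyword heuristic in a router. The model names and trigger words below are invented for illustration; a real router might key off token count, tool use, or a small classifier instead:

```python
def pick_model(task: str) -> str:
    """Route a task to a model tier based on rough complexity signals.

    The signal words and model names are hypothetical placeholders --
    substitute your own providers and heuristics.
    """
    hard_signals = ("debug", "architect", "analyze", "multi-step", "design")
    if any(word in task.lower() for word in hard_signals):
        return "big-reasoning-model"   # placeholder: your strong, slow model
    return "small-fast-model"          # placeholder: your cheap, fast model
```

Even a crude router like this cuts costs, because most day-to-day tasks land in the fast tier.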

Peter’s AI Stack

The actual tools used daily to run 300+ autonomous agents across 4 machines. No affiliates — just what works.

🧠
Claude
Primary AI
Anthropic’s model. Best at reasoning, code generation, and long-context work. The brain of everything.
💻
Claude Code
CLI Agent
Command-line AI that reads, writes, and runs code autonomously. Spawns subagents for parallel work.
Gemini
Fallback & Scripts
Google’s multimodal model. Handles script generation, image analysis, and serves as LLM fallback.
Groq
Fast Inference
Lightning-fast LLM inference for real-time tasks. When speed matters more than depth.
🔌
MCP
Tool Protocol
Model Context Protocol — connects AI to external tools. Like USB for agents. Created by Anthropic.
🗄
Supabase
Vector DB & RAG
Stores embeddings, powers the pRAG chatbot, handles auth. The memory layer for all agents.
🎙
edge-tts
Voice Generation
Microsoft neural TTS. Powers the daily Saarvis Intel video briefings. Free and fast.
🦀
OpenClaw
Agent Framework
Open-source agent orchestration. Manages multi-agent workflows, tool routing, and autonomous task execution.

Open Source by Peter

Production-grade tools extracted from the pipeline. MIT licensed. Zero bloat.

🏭
agent-factory
AI Agent Library + Generators
84 AI agent prompts across 12 departments, 20 industry playbooks, AI glossary, and LLM-powered generators that create fresh content automatically. Self-expanding library.
GitHub →
🔄
llm-failover
LLM Failover Library
Multi-provider LLM failover with rate-limit handling and automatic retry. Groq → Gemini → Cerebras. Zero deps.
GitHub →
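The pattern behind llm-failover looks roughly like the sketch below — try providers in order, retry on failure, then fall through to the next. This is a generic reconstruction of the idea with assumed function names, not the library's actual API:

```python
import time

def complete_with_failover(providers, prompt, retries=2, backoff=0.5):
    """Try each (name, call) provider in order; retry on errors, then fall through.

    `providers` is an ordered list of (name, callable) pairs, e.g.
    [("groq", groq_call), ("gemini", gemini_call), ("cerebras", cerebras_call)],
    where each callable takes a prompt string and returns a reply.
    """
    errors = {}
    for name, call in providers:
        for attempt in range(retries):
            try:
                return call(prompt)
            except Exception as exc:           # rate limits, timeouts, etc.
                errors[name] = str(exc)
                time.sleep(backoff * attempt)  # back off before retrying
    raise RuntimeError(f"All providers failed: {errors}")
```

Collecting the per-provider errors matters: when the whole chain fails, you want one exception that tells you why each link broke.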
🔍
seo-audit
SEO Scanner
38-check SEO scanner for static sites. Auto-fix, snapshot diffs, Lighthouse CWV, Google Search Console. Stdlib only.
GitHub →
📊
polydoge-api
Prediction Stats API
Public REST API for PolyDoge prediction stats. 4 endpoints: open picks, resolved history, performance stats, signal analysis. Auto-updated every 30 min.
Live API →

Prompt Cheat Sheet

Copy-paste these templates. They work across Claude, ChatGPT, Gemini — any model. See the full cheat sheet →

The Role Prompt
Domain-specific expertise
You are a [role] with [X years] of expertise in [domain]. Your audience is [who]. Your tone is [how]. Your goal is to [what].
Now help me with: [your actual request]

The Prompt Fixer
Improve weak prompts
Here is a prompt I wrote:
"""
[paste your prompt here]
"""
Grade this prompt A through F. Then rewrite it to be an A. Explain what you changed and why.

The Agent Spec
Build a custom agent
Design an AI agent with these specs:
IDENTITY: [who is this agent? role, personality, expertise]
SKILLS: [what can it do? list 3-5 specific capabilities]
TOOLS: [what does it have access to? APIs, files, databases]
GOALS: [what should it achieve? success criteria]
GUARDRAILS: [what should it never do?]
Generate a complete system prompt for this agent.

The Chain-of-Thought
Step-by-step reasoning
I need you to solve this problem step by step.
Problem: [describe it]
Think through this methodically:
1. First, identify what we know
2. Then, identify what we need to figure out
3. Work through each step, showing your reasoning
4. Check your work before giving the final answer
Do not skip steps or jump to conclusions.

The Output Controller
Enforce specific format
Return your response in this exact format:
FORMAT: [JSON / markdown table / bullet points / numbered list]
FIELDS: [list exactly what to include]
LENGTH: [word count or line count]
TONE: [professional / casual / technical]
DO NOT include: [what to exclude]
EXAMPLE: [paste one example of the ideal output]

Frequently Asked Questions

Questions we hear in every workshop.

Will AI replace all jobs?

No — but it will replace people who don’t learn to work with it. That’s not a bumper sticker; it’s what the research actually shows.

A March 2026 study out of Sun Yat-sen University and Alibaba (SWE-CI) tested whether today’s best AI agents could maintain real software codebases over time — not just fix a single bug, but do the ongoing work that human developers do: adding features, adapting to changing requirements, keeping things from breaking.

The result? They break things more than they fix them. Most models had a zero-regression rate below 25%, meaning 3 out of 4 times they touched working code, something else broke. AI is strong at one-shot tasks — answer a question, fix a bug, generate a file — but falls apart when the work requires long-term judgment, architectural thinking, and knowing when not to change something.

This pattern extends beyond software. AI is exceptional at tasks that are repeatable, bounded, and well-defined. It struggles with work that requires context accumulated over time, cross-functional judgment, and accountability for downstream consequences.

The jobs that survive aren’t the ones AI can’t do today — they’re the ones that require sustained ownership of outcomes over time. Architects, not bricklayers. Strategists, not summarizers.

Read the full paper (PDF) · View on arxiv →

Do I need to know how to code?

No. Most of this workshop is about prompting, thinking in systems, and directing AI — not writing code. When we do build things (like in the Website Builder), the AI writes the code for you. You just describe what you want.

Which AI tool should I start with?

Claude (by Anthropic) for reasoning and writing. It handles long documents, writes clean code, and follows complex instructions better than alternatives. Start with the free tier at claude.ai, then explore Claude Code when you’re ready to build agents.

How much does it cost to use AI?

Most tools have free tiers. Claude Pro is $20/month. For the workshop exercises, free tiers are enough. If you’re running agents at scale (like Peter does with 300+), API costs vary — but you can start at $0 and scale up as the ROI proves itself.

What’s the recommended learning path?

Follow the numbered flow on the Start Here page: (1) How LLMs Work — understand the engine, (2) Prompt Lab — practice prompting, (3) Whiteboard — map your ideas, (4) Website Builder — build something real, (5) RAG Explained — fix hallucinations, (6) MCP & Tool Use — connect to the world, (7) Agent Teams — go multi-agent, (8) Agent Library — browse ready-made agents, (9) Idea Factory — practice with challenges.

AI Glossary

Plain-English definitions of AI terms you'll actually encounter. No PhD required. New terms added weekly.

Browse the Full Glossary →

A Message from Saarvis

The AI agent behind the network has something to say.

Enjoyed these tips?

This free resource took 4+ years of real-world AI work to put together. If it helped you, I'd genuinely appreciate a recommendation on LinkedIn — and feel free to connect while you're there!

If you'd like to leave a tip or donation, consider becoming a Node Supporter ($75) of the Bitcoin Race Team USA. Every bit helps fuel the mission.

Message: Your AI workshop has been really helpful. Made a landing page/portfolio page for all my projects.
Peter gets a lot of these every day <3