💡 TL;DR — Routine shows that explicit JSON plans + clean parameter passing slash tool-call errors and unlock enterprise-grade LLM agents. Use this guide to turn the paper into a working marketing framework.


🗺️ How to Navigate This Page

  1. Watch the 3-min explainer (video at the top).
  2. Pre-read the PDF — even a skim helps: Routine research paper.
  3. Open the interactive notebook at https://notebooklm.google.com/notebook/09062800-9d65-4a44-bf03-3f29b2e6eb38 to explore highlights, Q&A, and key mind maps.
  4. Dive deeper with our contextual articles.
  5. Join the discussion — add questions or vote on build ideas via the comment widget at the bottom.
  6. Code the challenge — scroll to Build-In-Public Sprint and pick a task.

🛠️ Build-In-Public Sprint

Goal: ship a Routine-style agent that auto-pauses poor-ROAS ad sets on Meta in under 14 days.

| Step | What to Deliver | Hints |
| --- | --- | --- |
| 1 | Fork the starter repo | Uses Mastra TS + LangGraph |
| 2 | Draft a JSON plan (max 5 steps) | See Figure 3 in the paper |
| 3 | Implement the Executor and the Meta Marketing API calls | Re-use our typed wrappers |
| 4 | Log state→action→reward tuples | Ray RLlib buffer template included |
| 5 | Record a 90-sec Loom demo and share it in the comments | Top 3 voted demos get featured |
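To make steps 2–4 concrete, here is a minimal TypeScript sketch of what a Routine-style JSON plan and executor might look like. All names here (`PlanStep`, `AdSet`, `runPlan`, the ROAS threshold) are illustrative assumptions, not types from the paper, the starter repo, or the Meta Marketing API.

```typescript
// Hypothetical sketch: an explicit JSON plan (step 2), a tiny executor
// that pauses ad sets whose ROAS falls below a threshold (step 3), and
// state→action→reward tuple logging (step 4).

interface PlanStep {
  step: number;
  action: "fetch_adsets" | "compute_roas" | "pause_adset" | "log_tuple";
  params: Record<string, unknown>;
}

// Explicit JSON plan, capped at 5 steps per the sprint brief.
const plan: PlanStep[] = [
  { step: 1, action: "fetch_adsets", params: { accountId: "act_123" } },
  { step: 2, action: "compute_roas", params: {} },
  { step: 3, action: "pause_adset", params: { minRoas: 1.0 } },
  { step: 4, action: "log_tuple", params: {} },
];

interface AdSet {
  id: string;
  spend: number;   // total spend in account currency
  revenue: number; // attributed revenue
  status: "ACTIVE" | "PAUSED";
}

interface Tuple { state: string; action: string; reward: number }

// Executor: walks the ad sets, pauses any ACTIVE set with ROAS below
// minRoas, and records a tuple for each decision.
function runPlan(adsets: AdSet[], minRoas: number): { adsets: AdSet[]; log: Tuple[] } {
  const log: Tuple[] = [];
  const updated = adsets.map((a) => {
    const roas = a.spend > 0 ? a.revenue / a.spend : 0;
    if (a.status === "ACTIVE" && roas < minRoas) {
      // In a real build, this is where the typed Marketing API wrapper
      // would issue the status=PAUSED update for the ad set.
      log.push({ state: `${a.id}:roas=${roas.toFixed(2)}`, action: "pause", reward: -roas });
      return { ...a, status: "PAUSED" as const };
    }
    log.push({ state: `${a.id}:roas=${roas.toFixed(2)}`, action: "keep", reward: roas });
    return a;
  });
  return { adsets: updated, log };
}
```

Because the plan is plain JSON, the Planner's intent is auditable before any API call fires, and the tuple log drops straight into an RL replay buffer.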

🏁 Ready to level-up?

Join the Routine × Mastra Hackathon · Register on Luma

Timeline
• Submit by 8 Aug
• Demo Day: 11 Aug
Starter templates & helper code already in the repo — just fork and build!


🤔 Discussion Prompts

  • How might Routine’s plan/execution split improve auditability versus classic prompt-chains?
  • Would you distill down to a 7B model or keep GPT-4o as the Planner? What are the trade-offs?
  • Which KPIs make best reward signals across Search / DV360 / AMC?

🚀 Ready? Grab the repo, join the chat, and let’s build the next generation of performance agents — together.

Discussion & Idea Voting

Up-vote next week’s build idea by reacting with 👍 to any comment.

Published on Thursday, July 31, 2025