---
name: last30days
description: Research a topic from the last 30 days on Reddit + X + Web, become an expert, and write copy-paste-ready prompts for the user's target tool.
---
# last30days: Research Any Topic from the Last 30 Days

Research ANY topic across Reddit, X, and the web. Surface what people are actually discussing, recommending, and debating right now.
Use cases:
## CRITICAL: Parse User Intent
Before doing anything, parse the user's input for one of four query types (a classification sketch follows this list):
- PROMPTING - "X prompts", "prompting for X", "X best practices" → User wants to learn techniques and get copy-paste prompts
- RECOMMENDATIONS - "best X", "top X", "what X should I use", "recommended X" → User wants a LIST of specific things
- NEWS - "what's happening with X", "X news", "latest on X" → User wants current events/updates
- GENERAL - anything else → User wants broad understanding of the topic
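A minimal, hypothetical sketch of this classification as keyword matching. The actual parsing is done by the model in context; names like `classify_query` are illustrative, not part of the skill:

```python
import re

def classify_query(user_input: str) -> str:
    """Illustrative keyword-based mapping of a request to a QUERY_TYPE."""
    text = user_input.lower()
    # PROMPTING is checked first so "best practices" doesn't fall into RECOMMENDATIONS.
    if re.search(r"\bprompts?\b|prompting|best practices", text):
        return "PROMPTING"
    if re.search(r"\bbest\b|\btop\b|recommended|should i use", text):
        return "RECOMMENDATIONS"
    if re.search(r"news|what's happening|latest on", text):
        return "NEWS"
    return "GENERAL"

assert classify_query("UI design prompts for Midjourney") == "PROMPTING"
assert classify_query("best Claude Code skills") == "RECOMMENDATIONS"
```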
Common patterns:

- [topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- [topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- [topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK

IMPORTANT: Do NOT ask about target tool before research.
Store these variables:

- TOPIC = [extracted topic]
- TARGET_TOOL = [extracted tool, or "unknown" if not specified]
- QUERY_TYPE = [PROMPTING | RECOMMENDATIONS | NEWS | GENERAL]

## Setup Check
The skill works in three modes based on available API keys:
API keys are OPTIONAL. The skill will work without them using WebSearch fallback.
### First-Time Setup (Optional but Recommended)
If the user wants to add API keys for better results:
```bash
mkdir -p ~/.config/last30days
cat > ~/.config/last30days/.env << 'ENVEOF'
# last30days API Configuration
# Both keys are optional - skill works with WebSearch fallback

# For Reddit research (uses OpenAI's web_search tool)
OPENAI_API_KEY=

# For X/Twitter research (uses xAI's x_search tool)
XAI_API_KEY=
ENVEOF
chmod 600 ~/.config/last30days/.env
echo "Config created at ~/.config/last30days/.env"
echo "Edit to add your API keys for enhanced research."
```
DO NOT stop if no keys are configured. Proceed with web-only mode.
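For illustration only, here is a minimal sketch of how the three modes could be derived from that config. The real detection lives inside the script; this is an assumption about its behavior, not its actual code:

```python
import os

# Hypothetical sketch: derive full / partial / web-only mode from the .env file.
def detect_mode(env_path: str = os.path.expanduser("~/.config/last30days/.env")) -> str:
    keys = {}
    if os.path.exists(env_path):
        with open(env_path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    name, _, value = line.partition("=")
                    keys[name.strip()] = value.strip()
    have_openai = bool(keys.get("OPENAI_API_KEY"))
    have_xai = bool(keys.get("XAI_API_KEY"))
    if have_openai and have_xai:
        return "full"      # Reddit + X + Web
    if have_openai or have_xai:
        return "partial"   # one API source + Web
    return "web-only"      # WebSearch fallback only
```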
## Research Execution
IMPORTANT: The script handles API key detection automatically. Run it and check the output to determine mode.
### Step 1: Run the research script
```bash
python3 ~/.claude/skills/last30days/scripts/last30days.py "$ARGUMENTS" --emit=compact 2>&1
```

The script will automatically:
### Step 2: Check the output mode
The script output will indicate the mode:
### Step 3: Do WebSearch
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE (a query-building sketch follows these lists):
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
best {TOPIC} recommendations{TOPIC} list examplesmost popular {TOPIC}If NEWS ("what's happening with X", "X news"):
{TOPIC} news 2026{TOPIC} announcement updateIf PROMPTING ("X prompts", "prompting for X"):
{TOPIC} prompts examples 2026{TOPIC} techniques tipsIf GENERAL (default):
{TOPIC} 2026{TOPIC} discussionFor ALL query types:
- If user says "ChatGPT image prompting", search for "ChatGPT image prompting"
- Do NOT add "DALL-E", "GPT-4o", or other terms you think are related
- Your knowledge may be outdated - trust the user's terminology
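A hypothetical sketch of this query selection; the function and template names are illustrative only:

```python
# Illustrative only: maps QUERY_TYPE to the WebSearch queries listed above.
QUERY_TEMPLATES = {
    "RECOMMENDATIONS": ["best {t} recommendations", "{t} list examples", "most popular {t}"],
    "NEWS": ["{t} news 2026", "{t} announcement update"],
    "PROMPTING": ["{t} prompts examples 2026", "{t} techniques tips"],
    "GENERAL": ["{t} 2026", "{t} discussion"],
}

def build_queries(topic: str, query_type: str) -> list[str]:
    # Use the user's exact wording for TOPIC - never expand with related terms.
    return [tpl.format(t=topic) for tpl in QUERY_TEMPLATES.get(query_type, QUERY_TEMPLATES["GENERAL"])]

print(build_queries("ChatGPT image prompting", "PROMPTING"))
# ['ChatGPT image prompting prompts examples 2026', 'ChatGPT image prompting techniques tips']
```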
### Step 4: Wait for the background script to complete
Use TaskOutput to get the script results before proceeding to synthesis.
Depth options (passed through from user's command):

- --quick → Faster, fewer sources (8-12 each)
- --deep → Comprehensive (50-70 Reddit, 40-60 X)

## Judge Agent: Synthesize All Sources
After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
Do NOT display stats here - they come at the end, right before the invitation.
### FIRST: Internalize the Research
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
ANTI-PATTERN TO AVOID: If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
### If QUERY_TYPE = RECOMMENDATIONS
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
BAD synthesis for "best Claude Code skills":
> "Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
> "Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
### For all QUERY_TYPEs
Identify from the ACTUAL RESEARCH OUTPUT:
If research says "use JSON prompts" or "structured prompts", you MUST deliver prompts in that format later.
### THEN: Show Summary + Invite Vision
CRITICAL: Do NOT output any "Sources:" lists. The final display should be clean.
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned:
🏆 Most mentioned:
[Specific name] - mentioned {n}x (r/sub, @handle, blog.com)
[Specific name] - mentioned {n}x (sources)
[Specific name] - mentioned {n}x (sources)
[Specific name] - mentioned {n}x (sources)
[Specific name] - mentioned {n}x (sources)

Notable mentions: [other specific things with 1-2 mentions]
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
What I learned:

[2-4 sentences synthesizing key insights FROM THE ACTUAL RESEARCH OUTPUT.]
KEY PATTERNS I'll use:
[Pattern from research]
[Pattern from research]
[Pattern from research]

THEN - Stats (right before invitation):
For full/partial mode (has API keys):
---
✅ All agents reported back!
├─ 🟠 Reddit: {n} threads │ {sum} upvotes │ {sum} comments
├─ 🔵 X: {n} posts │ {sum} likes │ {sum} reposts
├─ 🌐 Web: {n} pages │ {domains}
└─ Top voices: r/{sub1}, r/{sub2} │ @{handle1}, @{handle2} │ {web_author} on {site}

For web-only mode (no API keys):
---
✅ Research complete!
├─ 🌐 Web: {n} pages │ {domains}
└─ Top sources: {author1} on {site1}, {author2} on {site2}

💡 Want engagement metrics? Add API keys to ~/.config/last30days/.env
- OPENAI_API_KEY → Reddit (real upvotes & comments)
- XAI_API_KEY → X/Twitter (real likes & reposts)
LAST - Invitation:
---
Share your vision for what you want to create and I'll write a thoughtful prompt you can copy-paste directly into {TARGET_TOOL}.

Use real numbers from the research output. The patterns should be actual insights from the research, not generic advice.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If the research was about ClawdBot (a self-hosted AI agent), your summary should be about ClawdBot, not Claude Code. If you catch yourself projecting your own knowledge instead of the research, rewrite it.
IF TARGET_TOOL is still unknown after showing results, ask NOW (not before research):
What tool will you use these prompts with?

Options:
- [Most relevant tool based on research - e.g., if research mentioned Figma/Sketch, offer those]
- Nano Banana Pro (image generation)
- ChatGPT / Claude (text/code)
- Other (tell me)

IMPORTANT: After displaying this, WAIT for the user to respond. Don't dump generic prompts.
## WAIT FOR USER'S VISION
After showing the stats summary with your invitation, STOP and wait for the user to tell you what they want to create.
When they respond with their vision (e.g., "I want a landing page mockup for my SaaS app"), THEN write a single, thoughtful, tailored prompt.
## WHEN USER SHARES THEIR VISION: Write ONE Perfect Prompt
Based on what they want to create, write a single, highly-tailored prompt using your research expertise.
### CRITICAL: Match the FORMAT the research recommends
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT:
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
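As a purely hypothetical illustration of format matching. The field names below are invented for the example, not taken from any research output:

```python
import json

# Hypothetical: research recommended "JSON prompts with device specs",
# so the delivered prompt is itself JSON, not prose. All fields are invented.
prompt = json.dumps({
    "subject": "landing page mockup for a SaaS app",
    "device": {"type": "desktop", "viewport": "1440x900"},
    "style": "clean, modern, high contrast",
}, indent=2)
print(prompt)  # this JSON string is what gets handed to the user
```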
Output Format:
Here's your prompt for {TARGET_TOOL}:
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS - if research said JSON, this is JSON. If research said natural language, this is prose. Match what works.]
This uses [brief 1-line explanation of what research insight you applied].
Quality Checklist:
## IF USER ASKS FOR MORE OPTIONS
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
## AFTER EACH PROMPT: Stay in Expert Mode
After delivering a prompt, offer to write more:
> Want another prompt? Just tell me what you're creating next.
## CONTEXT MEMORY
For the rest of this conversation, remember:
CRITICAL: After research is complete, you are now an EXPERT on this topic.
When the user asks follow-up questions:
Only do new research if the user explicitly asks about a DIFFERENT topic.
## Output Summary Footer (After Each Prompt)
After delivering a prompt, end with:
For full/partial mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} web pages

Want another prompt? Just tell me what you're creating next.
For web-only mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} web pages from {domains}

Want another prompt? Just tell me what you're creating next.
💡 Unlock Reddit & X data: Add API keys to ~/.config/last30days/.env