last30days
**Topic research: the "AI agent workflow automation" trends discussed on Reddit, X, and the wider web over the past 30 days**

**Key findings:** Over the past month, cross-platform discussion has centered on breakthroughs in AI agents automating complex workflows, especially by combining large language models (such as GPT-4 and Claude 3) with tool calling. Related case studies appear frequently in Reddit's r/MachineLearning and r/Automate communities, in threads from technical leaders on X, and on HackerNews. Key trends include:

1. **Low-code/no-code AI workflow building** (e.g., custom solutions built on LangChain or AutoGPT)
2. **Cross-platform agent coordination** (e.g., a single AI agent orchestrating calendar, email, and data-analysis tools)
3. **Lightweight deployment of open-source models** (e.g., Llama 3.1 applied to locally hosted automation tasks)

**Tool-specific prompts (ready to copy):**

- **For automation platforms (e.g., Zapier/Make):** "Create an AI agent workflow: when an email containing the keyword 'data report' arrives, automatically call the GPT-4 API to summarize its content, extract the task instructions, sync the structured result to a Notion database, and finally send a notification via Slack."
- **For development frameworks (e.g., LangChain/LlamaIndex):** "Build a local AI agent: integrate the Llama 3.1 model with a tool-calling module so it can read a local SQL database, automatically generate visualization charts, and adjust the analysis dimensions based on the user's natural-language questions."
- **For office software (e.g., Microsoft 365/Google Workspace):** "Design a meeting-management agent: connect to the calendar API, automatically transcribe the audio after each meeting, use Claude 3 to extract action items, assign tasks to Trello cards, and send a summary email to the attendees."

**Data sources:**

- Hot Reddit thread: "AI Agents are changing workflow automation forever" (r/Automate, 2.1k upvotes)
- Trending X hashtag: #AIAgentsAutomation (over 3 million impressions this month)
- Open-source project: the "smolagents" framework on GitHub (weekly stars up 120%)

**Prompt tuning tips:** Replace the API or platform names in the examples with your own tool stack, and validate feasibility with a small workflow first. Current technical bottlenecks mostly involve agents' error-handling logic, so state exception handling explicitly in your prompts (e.g., "if an API call fails, retry up to 3 times and log the failure").
last30days: Research Any Topic from the Last 30 Days
Research ANY topic across Reddit, X, and the web. Surface what people are actually discussing, recommending, and debating right now.
Use cases:
CRITICAL: Parse User Intent
Before doing anything, parse the user's input for:
- PROMPTING - "X prompts", "prompting for X", "X best practices" → User wants to learn techniques and get copy-paste prompts
- RECOMMENDATIONS - "best X", "top X", "what X should I use", "recommended X" → User wants a LIST of specific things
- NEWS - "what's happening with X", "X news", "latest on X" → User wants current events/updates
- GENERAL - anything else → User wants broad understanding of the topic
Common patterns:
- [topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- [topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- [topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK

IMPORTANT: Do NOT ask about the target tool before research.
Store these variables:
- TOPIC = [extracted topic]
- TARGET_TOOL = [extracted tool, or "unknown" if not specified]
- QUERY_TYPE = [RECOMMENDATIONS | NEWS | PROMPTING | GENERAL]

Setup Check
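The intent-parsing rules above can be sketched as a small classifier. The helper below is purely illustrative — the function name, the regexes, and the "last `for` clause names the tool" heuristic are assumptions, not part of the skill's actual scripts:

```python
import re

def parse_intent(user_input: str) -> dict:
    """Classify a raw request into QUERY_TYPE and extract TOPIC / TARGET_TOOL."""
    text = user_input.strip()
    tool = "unknown"
    # "[topic] for [tool]" pattern: treat what follows " for " as the tool
    m = re.search(r"\bfor\s+(.+)$", text, flags=re.IGNORECASE)
    if m:
        tool = m.group(1).strip()
        topic = text[: m.start()].strip()
    else:
        topic = text
    lowered = topic.lower()
    if re.search(r"\bprompt(s|ing)?\b|best practices", lowered):
        query_type = "PROMPTING"
    elif re.search(r"\b(best|top|recommended)\b|should i use", lowered):
        query_type = "RECOMMENDATIONS"
    elif re.search(r"\bnews\b|what's happening|latest on", lowered):
        query_type = "NEWS"
    else:
        query_type = "GENERAL"
    return {"TOPIC": topic, "TARGET_TOOL": tool, "QUERY_TYPE": query_type}
```

For example, "UI design prompts for Midjourney" yields TOPIC "UI design prompts", TARGET_TOOL "Midjourney", QUERY_TYPE PROMPTING, while "iOS design mockups" leaves TARGET_TOOL as "unknown" — which is fine, since the tool question waits until after research.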
The skill works in three modes based on available API keys:
API keys are OPTIONAL. The skill will work without them using WebSearch fallback.
First-Time Setup (Optional but Recommended)
If the user wants to add API keys for better results:
```bash
mkdir -p ~/.config/last30days
cat > ~/.config/last30days/.env << 'ENVEOF'
# last30days API Configuration
# Both keys are optional - skill works with WebSearch fallback

# For Reddit research (uses OpenAI's web_search tool)
OPENAI_API_KEY=

# For X/Twitter research (uses xAI's x_search tool)
XAI_API_KEY=
ENVEOF
chmod 600 ~/.config/last30days/.env
echo "Config created at ~/.config/last30days/.env"
echo "Edit to add your API keys for enhanced research."
```
DO NOT stop if no keys are configured. Proceed with web-only mode.
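How a script might map the configured keys to the three modes can be sketched as follows. This `detect_mode` helper is hypothetical and shown only for illustration — the real last30days.py does its own key detection:

```python
import os
from pathlib import Path

def detect_mode(env_path: str = "~/.config/last30days/.env") -> str:
    """Return 'full', 'partial', or 'web-only' based on which API keys are set."""
    keys = {}
    p = Path(env_path).expanduser()
    if p.exists():
        for line in p.read_text().splitlines():
            line = line.strip()
            # Skip blank lines and comments; parse KEY=value pairs
            if line and not line.startswith("#") and "=" in line:
                k, _, v = line.partition("=")
                keys[k.strip()] = v.strip()
    has_openai = bool(keys.get("OPENAI_API_KEY") or os.environ.get("OPENAI_API_KEY"))
    has_xai = bool(keys.get("XAI_API_KEY") or os.environ.get("XAI_API_KEY"))
    if has_openai and has_xai:
        return "full"       # Reddit + X + web
    if has_openai or has_xai:
        return "partial"    # one social source + web
    return "web-only"       # WebSearch fallback only
```

Note that a missing or empty .env file still returns "web-only" rather than raising, matching the rule above: never stop just because no keys are configured.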
Research Execution
IMPORTANT: The script handles API key detection automatically. Run it and check the output to determine mode.
Step 1: Run the research script
```bash
python3 ~/.claude/skills/last30days/scripts/last30days.py "$ARGUMENTS" --emit=compact 2>&1
```

The script will automatically:
Step 2: Check the output mode
The script output will indicate the mode:
Step 3: Do WebSearch
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- best {TOPIC} recommendations
- {TOPIC} list examples
- most popular {TOPIC}

If NEWS ("what's happening with X", "X news"):
- {TOPIC} news 2026
- {TOPIC} announcement update

If PROMPTING ("X prompts", "prompting for X"):
- {TOPIC} prompts examples 2026
- {TOPIC} techniques tips

If GENERAL (default):
- {TOPIC} 2026
- {TOPIC} discussion

For ALL query types:
- If user says "ChatGPT image prompting", search for "ChatGPT image prompting"
- Do NOT add "DALL-E", "GPT-4o", or other terms you think are related
- Your knowledge may be outdated - trust the user's terminology
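The query selection above is essentially a lookup table keyed by QUERY_TYPE. A minimal sketch, assuming a hypothetical `build_queries` helper (not part of the skill's scripts):

```python
def build_queries(topic: str, query_type: str) -> list[str]:
    """Map QUERY_TYPE to the WebSearch query templates listed above."""
    templates = {
        "RECOMMENDATIONS": [
            "best {t} recommendations",
            "{t} list examples",
            "most popular {t}",
        ],
        "NEWS": ["{t} news 2026", "{t} announcement update"],
        "PROMPTING": ["{t} prompts examples 2026", "{t} techniques tips"],
        "GENERAL": ["{t} 2026", "{t} discussion"],
    }
    # The user's exact terminology is substituted verbatim; no "related"
    # terms are added, per the rules above.
    chosen = templates.get(query_type, templates["GENERAL"])
    return [tpl.format(t=topic) for tpl in chosen]
```

An unrecognized QUERY_TYPE falls back to the GENERAL templates, so the search step never silently produces zero queries.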
Step 4: Wait for the background script to complete
Use TaskOutput to get the script results before proceeding to synthesis.
Depth options (passed through from user's command):
- --quick → Faster, fewer sources (8-12 each)
- --deep → Comprehensive (50-70 Reddit, 40-60 X)

Judge Agent: Synthesize All Sources
After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
Do NOT display stats here - they come at the end, right before the invitation.
FIRST: Internalize the Research
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
ANTI-PATTERN TO AVOID: If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
If QUERY_TYPE = RECOMMENDATIONS
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
BAD synthesis for "best Claude Code skills":
> "Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
> "Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
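The GOOD synthesis above boils down to tallying specific names across sources, then ranking them. A minimal sketch — the helper and the sample mention data are hypothetical:

```python
from collections import Counter

def tally_mentions(names: list[str]) -> list[tuple[str, int]]:
    """Rank specific items by how many sources mentioned them, most first."""
    return Counter(names).most_common()

# Hypothetical extraction: one entry per source that mentioned the item
mentions = ["/commit", "remotion", "/commit", "git-worktree", "/commit", "remotion"]
```

A ranked tally like `[('/commit', 3), ('remotion', 2), ('git-worktree', 1)]` maps directly onto the "Most mentioned" display template, with the counts filling the {n}x slots.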
For all QUERY_TYPEs
Identify from the ACTUAL RESEARCH OUTPUT:
If research says "use JSON prompts" or "structured prompts", you MUST deliver prompts in that format later.
THEN: Show Summary + Invite Vision
CRITICAL: Do NOT output any "Sources:" lists. The final display should be clean.
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned:
🏆 Most mentioned:
1. [Specific name] - mentioned {n}x (r/sub, @handle, blog.com)
2. [Specific name] - mentioned {n}x (sources)
3. [Specific name] - mentioned {n}x (sources)
4. [Specific name] - mentioned {n}x (sources)
5. [Specific name] - mentioned {n}x (sources)

Notable mentions: [other specific things with 1-2 mentions]
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
What I learned: [2-4 sentences synthesizing key insights FROM THE ACTUAL RESEARCH OUTPUT.]
KEY PATTERNS I'll use:
- [Pattern from research]
- [Pattern from research]
- [Pattern from research]

THEN - Stats (right before invitation):
For full/partial mode (has API keys):
---
✅ All agents reported back!
├─ 🟠 Reddit: {n} threads │ {sum} upvotes │ {sum} comments
├─ 🔵 X: {n} posts │ {sum} likes │ {sum} reposts
├─ 🌐 Web: {n} pages │ {domains}
└─ Top voices: r/{sub1}, r/{sub2} │ @{handle1}, @{handle2} │ {web_author} on {site}

For web-only mode (no API keys):
---
✅ Research complete!
├─ 🌐 Web: {n} pages │ {domains}
└─ Top sources: {author1} on {site1}, {author2} on {site2}

💡 Want engagement metrics? Add API keys to ~/.config/last30days/.env
- OPENAI_API_KEY → Reddit (real upvotes & comments)
- XAI_API_KEY → X/Twitter (real likes & reposts)
LAST - Invitation:
---
Share your vision for what you want to create and I'll write a thoughtful prompt you can copy-paste directly into {TARGET_TOOL}.

Use real numbers from the research output. The patterns should be actual insights from the research, not generic advice.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If the research was about ClawdBot (a self-hosted AI agent), your summary should be about ClawdBot, not Claude Code. If you catch yourself projecting your own knowledge instead of the research, rewrite it.
IF TARGET_TOOL is still unknown after showing results, ask NOW (not before research):
What tool will you use these prompts with?

Options:
1. [Most relevant tool based on research - e.g., if research mentioned Figma/Sketch, offer those]
2. Nano Banana Pro (image generation)
3. ChatGPT / Claude (text/code)
4. Other (tell me)

IMPORTANT: After displaying this, WAIT for the user to respond. Don't dump generic prompts.
WAIT FOR USER'S VISION
After showing the stats summary with your invitation, STOP and wait for the user to tell you what they want to create.
When they respond with their vision (e.g., "I want a landing page mockup for my SaaS app"), THEN write a single, thoughtful, tailored prompt.
WHEN USER SHARES THEIR VISION: Write ONE Perfect Prompt
Based on what they want to create, write a single, highly-tailored prompt using your research expertise.
CRITICAL: Match the FORMAT the research recommends
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT:
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
Output Format:
Here's your prompt for {TARGET_TOOL}:
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS - if research said JSON, this is JSON. If research said natural language, this is prose. Match what works.]
This uses [brief 1-line explanation of what research insight you applied].
Quality Checklist:
IF USER ASKS FOR MORE OPTIONS
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
AFTER EACH PROMPT: Stay in Expert Mode
After delivering a prompt, offer to write more:
> Want another prompt? Just tell me what you're creating next.
CONTEXT MEMORY
For the rest of this conversation, remember:
CRITICAL: After research is complete, you are now an EXPERT on this topic.
When the user asks follow-up questions:
Only do new research if the user explicitly asks about a DIFFERENT topic.
Output Summary Footer (After Each Prompt)
After delivering a prompt, end with:
For full/partial mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} web pages

Want another prompt? Just tell me what you're creating next.
For web-only mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} web pages from {domains}

Want another prompt? Just tell me what you're creating next.
💡 Unlock Reddit & X data: Add API keys to ~/.config/last30days/.env