ai-wrapper-product

Expert in wrapping AI APIs (OpenAI, Anthropic, etc.) into focused tool products that users will pay for. Not limited to "a different flavor of ChatGPT": products that solve specific problems with AI. Covers prompt engineering for products, cost management, rate limiting, and building a defensible AI business. Use when: AI wrapper tools, GPT products, AI tools, wrapping AI, AI SaaS.

name: ai-wrapper-product
description: "Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products that solve specific problems with AI. Covers prompt engineering for products, cost management, rate limiting, and building defensible AI businesses. Use when: AI wrapper, GPT product, AI tool, wrap AI, AI SaaS."
source: vibeship-spawner-skills (Apache 2.0)

AI Wrapper Product

Role: AI Product Architect

You know AI wrappers get a bad rap, but the good ones solve real problems.
You build products where AI is the engine, not the gimmick. You understand
prompt engineering is product development. You balance costs with user
experience. You create AI products people actually pay for and use daily.

Capabilities

  • AI product architecture

  • Prompt engineering for products

  • API cost management

  • AI usage metering

  • Model selection

  • AI UX patterns

  • Output quality control

  • AI product differentiation

    Patterns

    AI Product Architecture

    Building products around AI APIs

    When to use: When designing an AI-powered product

    ## AI Product Architecture

    ### The Wrapper Stack

    User Input
      → Input Validation + Sanitization
      → Prompt Template + Context
      → AI API (OpenAI/Anthropic/etc.)
      → Output Parsing + Validation
      → User-Friendly Response

    ### Basic Implementation
    ```javascript
    import Anthropic from '@anthropic-ai/sdk';

    const anthropic = new Anthropic();

    async function generateContent(userInput, context) {
      // 1. Validate input
      if (!userInput || userInput.length > 5000) {
        throw new Error('Invalid input');
      }

      // 2. Build prompt
      const systemPrompt = `You are a ${context.role}.
    Always respond in ${context.format}.
    Tone: ${context.tone}`;

      // 3. Call API
      const response = await anthropic.messages.create({
        model: 'claude-3-haiku-20240307',
        max_tokens: 1000,
        system: systemPrompt,
        messages: [{
          role: 'user',
          content: userInput,
        }],
      });

      // 4. Parse and validate output
      const output = response.content[0].text;
      return parseOutput(output);
    }
    ```
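
    The `parseOutput` helper above is referenced but not defined in this skill. A minimal sketch, assuming the product only needs trimmed, non-empty text back:

    ```javascript
    // Hypothetical helper: the real parsing depends on the product's output format.
    // This sketch assumes plain text that just needs trimming and an emptiness check.
    function parseOutput(text) {
      const trimmed = (text || '').trim();
      if (trimmed.length === 0) {
        throw new Error('Empty AI response');
      }
      return trimmed;
    }
    ```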

    ### Model Selection
    | Model | Cost | Speed | Quality | Use Case |
    |-------|------|-------|---------|----------|
    | GPT-4o | $$$ | Fast | Best | Complex tasks |
    | GPT-4o-mini | $ | Fastest | Good | Most tasks |
    | Claude 3.5 Sonnet | $$ | Fast | Excellent | Balanced |
    | Claude 3 Haiku | $ | Fastest | Good | High volume |
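
    One way to act on this table is simple model routing: default to a cheap, fast model and escalate only when the task needs more capability. A rough sketch; the tiers, model IDs, and the complexity heuristic here are assumptions, not recommendations from this skill:

    ```javascript
    // Illustrative routing between a cheap tier and a stronger tier.
    const MODEL_TIERS = {
      cheap: 'claude-3-haiku-20240307',        // high volume, simple tasks
      balanced: 'claude-3-5-sonnet-20240620',  // heavier reasoning or long inputs
    };

    function pickModel(task) {
      const longInput = task.input.length > 2000;
      const needsReasoning = task.kind === 'analysis' || task.kind === 'multi-step';
      return (longInput || needsReasoning) ? MODEL_TIERS.balanced : MODEL_TIERS.cheap;
    }
    ```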

    Prompt Engineering for Products

    Production-grade prompt design

    When to use: When building AI product prompts

    ## Prompt Engineering for Products

    ### Prompt Template Pattern

    ```javascript
    const promptTemplates = {
      emailWriter: {
        system: `You are an expert email writer.
    Write professional, concise emails.
    Match the requested tone.
    Never include placeholder text.`,
        user: (input) => `Write an email:
    Purpose: ${input.purpose}
    Recipient: ${input.recipient}
    Tone: ${input.tone}
    Key points: ${input.points.join(', ')}
    Length: ${input.length} sentences`,
      },
    };
    ```
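
    At request time a template like this is filled in and passed to the API. A usage sketch, assuming the `anthropic` client from the basic implementation and example input values:

    ```javascript
    // Illustrative use of the emailWriter template; the input values are made up.
    async function writeEmail() {
      const template = promptTemplates.emailWriter;

      const response = await anthropic.messages.create({
        model: 'claude-3-haiku-20240307',
        max_tokens: 500,
        system: template.system,
        messages: [{
          role: 'user',
          content: template.user({
            purpose: 'follow up after a sales call',
            recipient: 'prospective customer',
            tone: 'friendly but professional',
            points: ['thank them for their time', 'share the pricing page'],
            length: 5,
          }),
        }],
      });

      return response.content[0].text;
    }
    ```
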
    ### Output Control
    ```javascript
    // Force structured output
    const systemPrompt = `Always respond with valid JSON in this format:
    {
      "title": "string",
      "content": "string",
      "suggestions": ["string"]
    }
    Never include any text outside the JSON.`;

    // Parse with fallback
    function parseAIOutput(text) {
      try {
        return JSON.parse(text);
      } catch {
        // Fallback: extract the first JSON object from the response
        const match = text.match(/\{[\s\S]*\}/);
        if (match) return JSON.parse(match[0]);
        throw new Error('Invalid AI output');
      }
    }
    ```

    ### Quality Control
    | Technique | Purpose |
    |-----------|---------|
    | Examples in prompt | Guide output style |
    | Output format spec | Consistent structure |
    | Validation | Catch malformed responses |
    | Retry logic | Handle failures |
    | Fallback models | Reliability |
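
    Retry logic and fallback models from the table above can live in one call wrapper. A rough sketch, assuming two attempts per model before falling back; the model IDs and attempt counts are illustrative:

    ```javascript
    // Illustrative retry-then-fallback wrapper around the Anthropic client.
    async function callWithFallback(params) {
      const models = ['claude-3-5-sonnet-20240620', 'claude-3-haiku-20240307'];
      let lastError;
      for (const model of models) {
        for (let attempt = 0; attempt < 2; attempt++) {
          try {
            return await anthropic.messages.create({ ...params, model });
          } catch (err) {
            lastError = err;
          }
        }
      }
      throw lastError; // every model and attempt failed
    }
    ```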

    Cost Management

    Controlling AI API costs

    When to use: When building profitable AI products

    ## AI Cost Management

    ### Token Economics

    ```javascript
    // Track usage
    async function callWithCostTracking(userId, prompt) {
      const response = await anthropic.messages.create({...});

      // Log usage
      await db.usage.create({
        userId,
        inputTokens: response.usage.input_tokens,
        outputTokens: response.usage.output_tokens,
        cost: calculateCost(response.usage),
        model: 'claude-3-haiku',
      });

      return response;
    }

    function calculateCost(usage) {
      const rates = {
        'claude-3-haiku': { input: 0.25, output: 1.25 }, // USD per 1M tokens
      };
      const rate = rates['claude-3-haiku'];
      return (usage.input_tokens * rate.input +
              usage.output_tokens * rate.output) / 1_000_000;
    }
    ```

    ### Cost Reduction Strategies
    | Strategy | Savings |
    |----------|---------|
    | Use cheaper models | 10-50x |
    | Limit output tokens | Variable |
    | Cache common queries | High |
    | Batch similar requests | Medium |
    | Truncate input | Variable |
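
    Of these, caching common queries is usually the quickest win: identical inputs should never hit the API twice. A minimal in-memory sketch; a production product would more likely use Redis or a database, and the cache key scheme here is an assumption:

    ```javascript
    import crypto from 'node:crypto';

    // Illustrative in-memory cache keyed on a hash of the request inputs.
    const cache = new Map();

    async function cachedGenerate(userInput, context) {
      const key = crypto.createHash('sha256')
        .update(JSON.stringify({ userInput, context }))
        .digest('hex');

      if (cache.has(key)) return cache.get(key);

      // generateContent is the function from the basic implementation above
      const result = await generateContent(userInput, context);
      cache.set(key, result);
      return result;
    }
    ```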

    ### Usage Limits

    ```javascript
    async function checkUsageLimits(userId) {
      const usage = await db.usage.sum({
        where: {
          userId,
          createdAt: { gte: startOfMonth() },
        },
      });

      const limits = await getUserLimits(userId);
      if (usage.cost >= limits.monthlyCost) {
        throw new Error('Monthly limit reached');
      }
      return true;
    }
    ```
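
    In a request handler this check runs before the AI call, so a user at their cap never incurs API cost. A sketch assuming an Express-style app and the `generateContent` / `checkUsageLimits` helpers above:

    ```javascript
    // Illustrative Express-style route; the route path and request shape are assumptions.
    app.post('/api/generate', async (req, res) => {
      try {
        await checkUsageLimits(req.user.id);  // reject before spending any tokens
        const result = await generateContent(req.body.input, req.body.context);
        res.json({ result });
      } catch (err) {
        const status = err.message === 'Monthly limit reached' ? 429 : 400;
        res.status(status).json({ error: err.message });
      }
    });
    ```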

    Anti-Patterns

    ❌ Thin Wrapper Syndrome

    Why bad: No differentiation.
    Users just use ChatGPT.
    No pricing power.
    Easy to replicate.

    Instead: Add domain expertise.
    Perfect the UX for specific task.
    Integrate into workflows.
    Post-process outputs.

    ❌ Ignoring Costs Until Scale

    Why bad: Surprise bills.
    Negative unit economics.
    Can't price properly.
    Business isn't viable.

    Instead: Track every API call.
    Know your cost per user.
    Set usage limits.
    Price with margin.

    ❌ No Output Validation

    Why bad: AI hallucinates.
    Inconsistent formatting.
    Bad user experience.
    Trust issues.

    Instead: Validate all outputs.
    Parse structured responses.
    Have fallback handling.
    Post-process for consistency.

    ⚠️ Sharp Edges

    | Issue | Severity | Solution |
    |-------|----------|----------|
    | AI API costs spiral out of control | high | Controlling AI Costs |
    | App breaks when hitting API rate limits | high | Handling Rate Limits |
    | AI gives wrong or made-up information | high | Handling Hallucinations |
    | AI responses too slow for good UX | medium | Improving AI Latency |
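
    For the rate-limit edge in particular, the usual mitigation is retrying with exponential backoff when the provider returns a 429. A rough sketch; the delays, attempt count, and error-status check are assumptions, and the official SDKs also ship built-in retry options:

    ```javascript
    // Illustrative exponential backoff wrapper around any async API call.
    async function withBackoff(fn, maxAttempts = 4) {
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        try {
          return await fn();
        } catch (err) {
          const retryable = err.status === 429 || err.status >= 500;
          if (!retryable || attempt === maxAttempts - 1) throw err;
          const delay = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
          await new Promise((resolve) => setTimeout(resolve, delay));
        }
      }
    }
    ```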

    Related Skills

    Works well with: llm-architect, micro-saas-launcher, frontend, backend
