human_test()

Let AI hire humans to test your product. Get structured, agent-parseable feedback reports in hours, not weeks.

Browse Tests

Using an AI coding tool? Install as a skill:

npx skills add avivahe326/human-test-skill

Powered by the skills.sh open agent skills ecosystem

How It Works

1. Create a Test

Call human_test() from your AI agent or use the web form. AI generates a structured test plan automatically.

2. Humans Test

Real testers claim your task, use your product, and provide guided feedback with screen recording, audio narration, and NPS rating.

3. Get Structured Report

AI generates a structured report with [CRITICAL]/[MAJOR]/[MINOR] severity tags, evidence, impact analysis, and P0-P3 prioritized recommendations — designed for your AI agent to parse and act on directly.

Agent Auto-Fix: From Feedback to Code

Reports are structured so your AI agent can read them and fix issues automatically — no human in the loop needed after testing.

1. Agent calls human_test() with your product URL
2. Real humans test and submit feedback
3. Agent receives structured report via webhook
4. Agent parses [CRITICAL]/[MAJOR] issues and auto-generates fixes

Each issue in the report includes Evidence, Impact, and a specific Recommendation — giving your agent everything it needs to write a targeted fix.
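The parsing step above can be sketched in a few lines. This is an illustrative example, not the platform's actual parser: it assumes the report is plain text in which each issue line begins with a bracketed severity tag, as in the sample report below; the `extractIssues` function and the sample strings are hypothetical.

```javascript
// Pull severity-tagged issue lines out of a report string.
// Assumes lines like "[CRITICAL] Dead URL at test entry point".
function extractIssues(report) {
  const issues = [];
  for (const line of report.split('\n')) {
    const match = line.match(/^\[(CRITICAL|MAJOR|MINOR)\]\s*(.+)/);
    if (match) issues.push({ severity: match[1], title: match[2] });
  }
  return issues;
}

// Hypothetical report fragment for illustration.
const sample = [
  '[CRITICAL] Dead URL at test entry point',
  '[MAJOR] Search box lacks placeholder text',
  'Narrative line without a tag is ignored',
].join('\n');

const critical = extractIssues(sample).filter(i => i.severity === 'CRITICAL');
console.log(critical.length); // 1
```

An agent would then feed each issue's Evidence and Recommendation fields into its fix-generation step.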

For Developers

// One API call. Real human feedback.
const { taskId } = await fetch('/api/skill/human-test', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    url: 'https://your-product.com',
    focus: 'Test the onboarding flow',
    maxTesters: 5,
    repoUrl: 'https://github.com/you/repo',  // get code fix suggestions
    webhookUrl: 'https://your-server.com/webhook',  // report ready notification
    codeFixWebhookUrl: 'https://your-server.com/code-fix',  // code fix ready notification
  }),
}).then(r => r.json());

// Option 1: Poll for results
const result = await fetch(`/api/skill/status/${taskId}`).then(r => r.json());
result.report  // AI-generated usability report

// Option 2: Receive webhook POST when report is ready
// → { taskId, status, title, targetUrl, report, completedAt }

For Testers

Test real products and help developers improve. Browse available tests, complete guided feedback tasks, and build your testing portfolio.

Sample Report (Agent-Parseable)

This is a real AI-generated report from our platform. Reports use consistent markdown structure with severity tags, evidence fields, and priority labels — so your AI agent can read and act on them directly:

sogou.com · 1 Tester · NPS: 8.0/10 · 2 Critical · 2 Major · 1 Minor

Executive Summary

A single usability session with tester lizhi revealed a critical infrastructure failure — the bare nginx 404 on sogo.com — that would prevent any new user from reaching the product. The tester exited to a competitor (360 Search) within 78 seconds without completing a single goal-directed search.

Issues (5)

CRITICAL

Dead URL at test entry point — bare nginx 404 on sogo.com

Evidence: lizhi landed on a bare nginx 404 at sogo.com for the first ~9 seconds. Recovery only happened because the tester already knew the correct domain (sogou.com).
Impact: First-time users have zero recovery path — no branded error page, no redirect, no suggestion. 100% bounce rate for mistyped URLs.
Recommendation: Implement a 301 redirect from sogo.com → sogou.com, or deploy a branded error page with a "Did you mean sogou.com?" recovery link.
CRITICAL

Product failed to retain user — exited to competitor within 78 seconds

Evidence: lizhi left the product twice: at ~48s to a 163.com article about search engine alternatives, and at ~72s to competitor 360 Search (so.com).
Impact: For a search engine, losing users to a competitor article within the first session is a core retention failure.
Recommendation: Add task scaffolding to the homepage — featured search prompts, category entry points, or a value proposition that gives users a reason to stay.

human_test()

Real human feedback for AI-built products.