What are PromptLoop Tasks?
PromptLoop Tasks are no-code AI agents that take structured inputs (like websites or search terms) and return clean, formatted data at any scale. They’re the core building blocks of the PromptLoop platform, designed to automate research that would normally take hours per prospect or company.
Key Benefits
- Automate research that normally takes hours per prospect or company
- Scale instantly from a single test row to thousands of data points
- Keep results consistent with strict formats, confidence scores, and guardrails
- Process any format from millions of company page types using optimized AI models
Task Types & Capabilities
1. Website Tasks
Best when you have: A list of URLs
What it does: Crawls websites and extracts specific data points
Example use case: Find pricing information on competitor websites
2. Search Tasks
Best when you have: Only a company name or keyword
What it does: Searches for relevant websites first, then crawls them
Example use case: Get LinkedIn URL for “Acme Corp”
Web Browsing Capabilities
Crawl Depth Options
Choose the appropriate browsing depth based on your data requirements:

| Depth | How Deep It Goes | When to Use | Cost |
|---|---|---|---|
| Single Page | Only the exact URL provided | Data only from that specific page | Lowest |
| Smart Crawl (default) | Follows relevant links a few levels deep | Most website tasks; best balance of speed and cost | Standard |
| Deep Research | Explores far more pages throughout the site | Hard-to-find info buried deep in sites | Higher |
Data Format Options
Tasks support multiple output formats to match your exact needs:

| Format | Best For | Example |
|---|---|---|
| Text | Descriptions & summaries | Company descriptions |
| True/False | Yes/no checks | “Does this company offer API access?” |
| Number | Counts, prices, metrics | Employee count, revenue |
| Link | URLs | Contact page URLs, social links |
| Single Category | One label from predefined options | Industry classification |
| Multiple Categories | Multi-label tagging | Services offered |
| List | Multiple items (creates additional rows) | List of team members |
| JSON | Complex structured data | Contact information objects |
Single Jobs vs Batch Processing
Single Jobs
Use case: Real-time processing, testing, or individual requests
How it works: Send one input, get immediate results
Best for:
- Testing task configurations
- Real-time integrations
- Processing individual records on-demand
`POST /v0/tasks/{id}`
Example Request:
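A minimal sketch in Python using the `requests` library; the base URL, API-key header name, and payload field names are illustrative assumptions, not confirmed API details:

```python
# Minimal single-job sketch. The base URL, "X-API-Key" header, and the
# "inputs"/"url" payload fields are assumptions; consult your task's
# defined input requirements for the exact schema.
import requests

TASK_ID = "your-task-id"  # placeholder
resp = requests.post(
    f"https://api.promptloop.com/v0/tasks/{TASK_ID}",  # base URL assumed
    headers={"X-API-Key": "YOUR_API_KEY"},
    json={"inputs": {"url": "https://example.com"}},
)
resp.raise_for_status()
print(resp.json())  # structured key-value output
```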
Batch Processing
Use case: Large-scale data processing (hundreds to thousands of inputs)
How it works: Upload JSONL data, monitor progress, retrieve results when complete
Best for:
- Processing entire datasets
- Bulk research operations
- Building comprehensive databases
- `POST /v0/batches` (launch)
- `GET /v0/batches/{id}` (monitor)
- `GET /v0/batches/{id}/results` (retrieve)
1. Launch Batch Job: Submit your JSONL data to start processing multiple inputs simultaneously.
2. Monitor Progress: Check job status periodically using the batch ID.
3. Retrieve Results: Download completed results once processing is finished.
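The three steps map directly onto the endpoints above. Here is a minimal end-to-end sketch in Python; the base URL, auth header, upload mechanics, and response field names (`id`, `status`) are assumptions made for illustration:

```python
# Launch/monitor/retrieve sketch. Endpoint paths come from this page;
# everything else (base URL, auth header, field names) is assumed.
import time
import requests

BASE = "https://api.promptloop.com"      # assumed base URL
HEADERS = {"X-API-Key": "YOUR_API_KEY"}  # assumed auth header

# 1. Launch: submit JSONL data (upload format details assumed)
jsonl = '{"data_uuid": "row-1", "url": "https://example.com"}\n'
batch = requests.post(f"{BASE}/v0/batches", headers=HEADERS, data=jsonl).json()
batch_id = batch["id"]                   # assumed response field

# 2. Monitor: poll status periodically using the batch ID
while True:
    status = requests.get(f"{BASE}/v0/batches/{batch_id}", headers=HEADERS).json()
    if status.get("status") in ("completed", "failed"):  # assumed values
        break
    time.sleep(30)

# 3. Retrieve: download results once processing is finished
results = requests.get(f"{BASE}/v0/batches/{batch_id}/results", headers=HEADERS)
print(results.text)
```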
Input & Output Structure
Input Requirements
Each task defines specific input requirements:

- Website tasks: Valid URLs (e.g., https://example.com)
- Search tasks: Search terms or company names (e.g., "Acme Corp")
- Data UUID: Unique identifier for tracking each input through processing
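For batch uploads, each JSONL line carries one input plus its data UUID. A hypothetical two-row file (field names beyond the UUID concept are illustrative):

```jsonl
{"data_uuid": "123e4567-e89b-12d3-a456-426614174000", "url": "https://example.com"}
{"data_uuid": "223e4567-e89b-12d3-a456-426614174001", "search_term": "Acme Corp"}
```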
Output Structure
Task outputs are structured as key-value pairs.
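A hypothetical output record might look like the following; the key names depend entirely on how your task's outputs are defined:

```json
{
  "data_uuid": "123e4567-e89b-12d3-a456-426614174000",
  "outputs": {
    "offers_api_access": true,
    "employee_count": 250,
    "contact_page": "https://example.com/contact"
  }
}
```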
Advanced Features
Search Engine Techniques
Enhance your search tasks with standard search operators:

- `site:domain.com`: results only from a specific site
- `-site:domain.com`: exclude a specific site
- `term1 AND term2`: results containing both terms
- `"exact phrase"`: exact phrase matching
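Operators can be combined. For instance, a hypothetical query for the “get LinkedIn URL for Acme Corp” example above:

```
site:linkedin.com "Acme Corp"
```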
Chained Tasks
Connect multiple tasks in sequence for complex workflows:

1. Website Discovery: Find company website from name
2. Data Extraction: Extract contact information from website
3. Social Enrichment: Find LinkedIn profiles for key contacts
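A minimal sketch of such a chain in Python, feeding one task's output into the next; the task IDs, output key, and `run_task` helper are hypothetical:

```python
# Two-step chain sketch: a search task discovers the website, then a
# website task extracts contacts from it. Task IDs, field names, and
# response shapes are assumptions.
import requests

BASE = "https://api.promptloop.com"      # assumed base URL
HEADERS = {"X-API-Key": "YOUR_API_KEY"}  # assumed auth header

def run_task(task_id: str, inputs: dict) -> dict:
    """Run a single job and return its key-value outputs (shape assumed)."""
    resp = requests.post(f"{BASE}/v0/tasks/{task_id}",
                         headers=HEADERS, json={"inputs": inputs})
    resp.raise_for_status()
    return resp.json()

# Step 1: Website Discovery (search task)
discovery = run_task("website-discovery-task-id", {"search_term": "Acme Corp"})
website = discovery["website_url"]       # assumed output key

# Step 2: Data Extraction (website task) using the discovered URL
contacts = run_task("data-extraction-task-id", {"url": website})
print(contacts)
```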
Webhooks for Batch Processing
Receive real-time notifications when batch jobs complete.
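A completion notification delivered to your endpoint might resemble the payload below; the event name and field names are assumptions for illustration:

```json
{
  "event": "batch.completed",
  "batch_id": "batch_abc123",
  "results_url": "/v0/batches/batch_abc123/results"
}
```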
Best Practices
Task Design
- Focus: Create tasks for specific purposes and only ask for the data you need
- Testing: Use the single job endpoint to test and iterate on task configurations
- Formatting: Select the correct output format for each data type to ensure consistency
Performance Optimization
- Start with Smart Crawl; upgrade to Deep Research only if data is missing
- Keep search queries generic (not company-specific) for better consistency
- Use Categories for any data you’ll filter or group on later
Error Handling
- Implement proper retry logic for rate limits (429 errors); see the sketch after this list
- Validate input data using the `/v0/batches/validate-data` endpoint
- Monitor job status regularly for batch processing
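A minimal retry sketch for 429 responses using exponential backoff; it assumes a numeric `Retry-After` header in seconds when present:

```python
# Retry on 429 with exponential backoff. Assumes Retry-After, when sent,
# is a number of seconds.
import time
import requests

def post_with_retry(url: str, max_retries: int = 5, **kwargs) -> requests.Response:
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.post(url, **kwargs)
        if resp.status_code != 429:
            return resp
        wait = float(resp.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2  # double the fallback delay each attempt
    raise RuntimeError(f"Rate limited after {max_retries} retries")
```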
Rate Limits & Constraints
Batch Limits
- File size: 100MB maximum per JSONL upload
- Processing: Subject to account usage limits
- Results: Use `output_type=stream` for large result sets
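A sketch of consuming a large result set line by line; the `output_type=stream` parameter comes from this page, while the base URL, auth header, and per-line format (assumed JSONL) are illustrative:

```python
# Stream results instead of buffering the whole response in memory.
import requests

resp = requests.get(
    "https://api.promptloop.com/v0/batches/BATCH_ID/results",  # base URL assumed
    headers={"X-API-Key": "YOUR_API_KEY"},
    params={"output_type": "stream"},
    stream=True,
)
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))  # one result record per line (assumed JSONL)
```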
API Rate Limits
- Single jobs: 5 TPM for task execution
- Batch jobs: 4 TPM for batch creation
- Monitoring: 20 TPM for status checks