# Helicone Integration
Helicone is an LLM observability platform that provides analytics, caching, and rate limiting for AI applications.
## Overview
The Helicone integration enables LangGuard to:
- Import requests from Helicone
- Track LLM costs and usage
- Monitor latency and performance
- Apply governance policies
## Prerequisites
- A Helicone account
- API key from Helicone
- Requests logged through Helicone proxy
### Getting Your API Key

1. Log in to Helicone.
2. Navigate to Settings > API Keys.
3. Create a new API key, or copy an existing one.
## Setup

### Add Integration

1. In LangGuard, navigate to Settings > Integrations.
2. Click Add Integration.
3. Select Helicone.
4. Enter your credentials:
   - Name: a friendly display name for this integration
   - API Key: your Helicone API key
5. Click Test Connection.
6. Configure sync settings.
7. Click Save.
### Environment Variables

The API key can also be supplied via an environment variable:

```shell
HELICONE_API_KEY=your-api-key
```
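With the key in place, you can sanity-check it against Helicone's request-query API. The sketch below only builds the request; the endpoint path and filter body are assumptions to verify against Helicone's current API reference:

```python
import json
import os
import urllib.request

# Assumed endpoint; verify the current path in Helicone's API reference.
HELICONE_API = "https://api.helicone.ai/v1/request/query"

def build_request_query(api_key: str, limit: int = 10) -> urllib.request.Request:
    """Build (but do not send) a POST that fetches recent request logs."""
    body = json.dumps({"limit": limit, "offset": 0}).encode()
    return urllib.request.Request(
        HELICONE_API,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it:
#   with urllib.request.urlopen(build_request_query(os.environ["HELICONE_API_KEY"])) as r:
#       print(json.load(r))
```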
## What Gets Synced
| Helicone Data | LangGuard Mapping |
|---|---|
| Requests | Traces |
| Responses | Output data |
| Costs | Cost metrics |
| Latency | Duration |
| Properties | Metadata |
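As a concrete illustration of this mapping, the function below reshapes one Helicone request record into a trace-like dict. All field names on both sides are hypothetical placeholders; neither Helicone's response schema nor LangGuard's trace schema is spelled out on this page:

```python
def helicone_to_trace(record: dict) -> dict:
    """Reshape one Helicone request record per the mapping table above.

    Field names are illustrative, not a documented schema.
    """
    return {
        "trace_id": record.get("request_id"),     # Requests   -> traces
        "output": record.get("response_body"),    # Responses  -> output data
        "cost_usd": record.get("cost"),           # Costs      -> cost metrics
        "duration_ms": record.get("latency_ms"),  # Latency    -> duration
        "metadata": record.get("properties", {}), # Properties -> metadata
    }
```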
## Key Features

### Cost Tracking
Helicone provides detailed cost breakdowns:
- Per-request costs
- Model-level aggregation
- User-level attribution
- Organization totals
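These roll-ups all reduce to grouping per-request costs by some key. A minimal sketch, assuming each synced record carries a `cost_usd` field plus `model` and `user` keys (the field names are placeholders):

```python
from collections import defaultdict

def aggregate_costs(records: list[dict], by: str) -> dict:
    """Sum per-request costs grouped by a field such as "model" or "user"."""
    totals: dict = defaultdict(float)
    for r in records:
        totals[r.get(by, "unknown")] += r.get("cost_usd", 0.0)
    return dict(totals)
```

The same function covers model-level aggregation (`by="model"`), user-level attribution (`by="user"`), and organization totals (sum over all groups).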
### Performance Metrics
- Request latency (P50, P95, P99)
- Tokens per second
- Cache hit rates
- Error rates
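For reference, the P50/P95/P99 latency figures above can be reproduced from raw per-request latency samples with the standard library; `statistics.quantiles` with `method="inclusive"` matches the common definition for a fully observed sample:

```python
import statistics

def latency_percentiles(samples_ms: list[float]) -> dict:
    """Compute P50/P95/P99 from raw per-request latencies in milliseconds."""
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    # quantiles(n=100) returns the 99 cut points q1..q99.
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}
```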
## Troubleshooting

### No Requests Found
- Verify requests are going through Helicone proxy
- Check API key has read permissions
- Adjust time range settings
### Missing Costs
- Ensure model is supported by Helicone pricing
- Check custom pricing configuration
- Verify costs are enabled in Helicone
## Next Steps
- Monitoring - Track costs and performance
- Policies - Set up cost governance