provide_performance_feedback
Share performance outcomes with publishers to enable data-driven optimization and improved campaign delivery.
Response Time: ~5 seconds (data ingestion)
Request Schema: https://adcontextprotocol.org/schemas/v1/media-buy/provide-performance-feedback-request.json
Response Schema: https://adcontextprotocol.org/schemas/v1/media-buy/provide-performance-feedback-response.json
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| media_buy_id | string | Yes | Publisher's media buy identifier |
| measurement_period | object | Yes | Time period for performance measurement |
| performance_index | number | Yes | Normalized performance score (0.0 = no value, 1.0 = expected, >1.0 = above expected) |
| package_id | string | No | Specific package within the media buy (if feedback is package-specific) |
| creative_id | string | No | Specific creative asset (if feedback is creative-specific) |
| metric_type | string | No | The business metric being measured (defaults to "overall_performance") |
| feedback_source | string | No | Source of the performance data (defaults to "buyer_attribution") |
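For illustration, a minimal request using only the required fields might look like the following. Values are examples, and the start/end sub-fields of measurement_period are an assumption here rather than taken from the schema:

```json
{
  "media_buy_id": "gam_1234567890",
  "measurement_period": {
    "start": "2025-01-01T00:00:00Z",
    "end": "2025-01-31T23:59:59Z"
  },
  "performance_index": 1.0
}
```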
Response (Message)
The response includes a human-readable message that:
- Confirms receipt of the performance feedback
- Summarizes the performance level provided
- Explains how the feedback will be used for optimization
- Provides next steps or recommendations
- MCP: Returned as a message field in the JSON response
- A2A: Returned as a text part in the artifact
Response (Payload)
Field Descriptions
- success: Whether the performance feedback was successfully received
- message: Optional human-readable message about the feedback processing
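A sketch of the payload with just the two documented fields (the message text is illustrative):

```json
{
  "success": true,
  "message": "Performance feedback received and queued for optimization."
}
```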
Protocol-Specific Examples
The AdCP payload is identical across protocols. Only the request/response wrapper differs.
MCP Request
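A sketch of the MCP tool call, assuming the standard name/arguments wrapper; the values and the measurement_period sub-fields are illustrative:

```json
{
  "name": "provide_performance_feedback",
  "arguments": {
    "media_buy_id": "gam_1234567890",
    "measurement_period": {
      "start": "2025-01-01T00:00:00Z",
      "end": "2025-01-31T23:59:59Z"
    },
    "performance_index": 0.85,
    "metric_type": "brand_lift"
  }
}
```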
MCP Response
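A matching MCP response sketch, returned as flat JSON with the message field alongside the payload fields:

```json
{
  "success": true,
  "message": "Performance feedback received for media buy gam_1234567890 and queued for optimization."
}
```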
A2A Request
Natural Language Invocation
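One way a buyer agent might phrase this as a plain-text A2A message. The wrapper shape is assumed; exact part field names depend on the A2A version in use:

```json
{
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Here is performance feedback for media buy gam_1234567890: brand lift for January came in about 15% below expectations."
      }
    ]
  }
}
```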
Explicit Skill Invocation
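A sketch of an explicit skill invocation, assuming a skill/input wrapper; per the Key Differences below, the input object mirrors MCP's arguments exactly:

```json
{
  "skill": "provide_performance_feedback",
  "input": {
    "media_buy_id": "gam_1234567890",
    "measurement_period": {
      "start": "2025-01-01T00:00:00Z",
      "end": "2025-01-31T23:59:59Z"
    },
    "performance_index": 0.85,
    "metric_type": "brand_lift"
  }
}
```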
A2A Response
A2A returns results as artifacts:
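A sketch of the returned artifact, with a text part for the message and a data part for the payload; the artifact name and part field names are assumptions:

```json
{
  "artifacts": [
    {
      "name": "performance_feedback_result",
      "parts": [
        {
          "kind": "text",
          "text": "Performance feedback received for media buy gam_1234567890 and queued for optimization."
        },
        {
          "kind": "data",
          "data": {
            "success": true
          }
        }
      ]
    }
  ]
}
```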
Key Differences
- MCP: Direct tool call with arguments, returns flat JSON response
- A2A: Skill invocation with input, returns artifacts with text and data parts
- Payload: The input field in A2A contains the exact same structure as MCP's arguments
Scenarios
Example 1: Campaign-Level Performance Feedback
Request
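A request sketch consistent with the response message below; the measurement period is illustrative:

```json
{
  "media_buy_id": "gam_1234567890",
  "measurement_period": {
    "start": "2025-01-01T00:00:00Z",
    "end": "2025-01-31T23:59:59Z"
  },
  "performance_index": 0.85,
  "metric_type": "brand_lift",
  "feedback_source": "buyer_attribution"
}
```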
Response - Below Expected Performance
Message: “Performance feedback received for campaign gam_1234567890. The 15% below-expected brand lift suggests targeting refinement is needed. Our optimization algorithms will reduce spend on underperforming segments starting with the next cycle.”
Payload:
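Assuming the payload simply echoes the acknowledgment:

```json
{
  "success": true,
  "message": "Performance feedback received for campaign gam_1234567890. The 15% below-expected brand lift suggests targeting refinement is needed. Our optimization algorithms will reduce spend on underperforming segments starting with the next cycle."
}
```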
Example 2: Package-Specific Performance Feedback
Request
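A request sketch for package-level feedback; the 110% above-expected click-through rate maps to a performance index of 2.1 (the media_buy_id and dates are illustrative):

```json
{
  "media_buy_id": "gam_1234567890",
  "package_id": "pkg_social_feed",
  "measurement_period": {
    "start": "2025-01-01T00:00:00Z",
    "end": "2025-01-31T23:59:59Z"
  },
  "performance_index": 2.1,
  "metric_type": "click_through_rate"
}
```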
Response - Exceptional Performance
Message: “Outstanding performance feedback for package pkg_social_feed! The 110% above-expected click-through rate indicates this audience segment is highly engaged. We’ll increase allocation to similar inventory and audiences.”
Payload:
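Payload sketch, same assumed shape as above:

```json
{
  "success": true,
  "message": "Outstanding performance feedback for package pkg_social_feed! The 110% above-expected click-through rate indicates this audience segment is highly engaged. We'll increase allocation to similar inventory and audiences."
}
```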
Example 3: Creative-Specific Performance Feedback
Request
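A request sketch for creative-level feedback; a 35% below-expected completion rate maps to an index of 0.65 (the media_buy_id and dates are illustrative):

```json
{
  "media_buy_id": "gam_1234567890",
  "creative_id": "creative_video_123",
  "measurement_period": {
    "start": "2025-01-01T00:00:00Z",
    "end": "2025-01-31T23:59:59Z"
  },
  "performance_index": 0.65,
  "metric_type": "completion_rate"
}
```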
Response - Poor Creative Performance
Message: “Creative creative_video_123 shows 35% below-expected completion rate. Consider creative refresh or A/B testing alternative versions.”
Payload:
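And the corresponding payload sketch:

```json
{
  "success": true,
  "message": "Creative creative_video_123 shows 35% below-expected completion rate. Consider creative refresh or A/B testing alternative versions."
}
```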
Example 4: Multiple Performance Metrics (Future)
Request - Batch Feedback (Not Implemented Yet)
Performance Index Scale
The performance index provides a normalized way to communicate business outcomes:
- 0.0: No measurable value or impact
- 0.5: Significantly below expectations (-50%)
- 1.0: Meets baseline expectations (0% variance)
- 1.5: Exceeds expectations by 50%
- 2.0+: Exceptional performance (100%+ above expected)
Common Metric Types
- overall_performance: General campaign success (default)
- conversion_rate: Post-click or post-view conversions
- brand_lift: Brand awareness or consideration lift
- click_through_rate: Engagement with creative
- completion_rate: Video or audio completion rates
- viewability: Viewable impression rate
- brand_safety: Brand safety compliance
- cost_efficiency: Cost per desired outcome
Feedback Sources
- buyer_attribution: Buyer’s own measurement and attribution
- third_party_measurement: Independent measurement partner
- platform_analytics: Publisher platform’s analytics
- verification_partner: Third-party verification service
How Publishers Use Performance Feedback
Publishers leverage performance indices to:
- Optimize Targeting: Shift impressions to high-performing segments and audiences
- Improve Inventory: Identify and prioritize high-value placements
- Adjust Pricing: Update CPMs based on proven value delivery
- Enhance Algorithms: Train machine learning models on actual business outcomes
- Product Development: Refine product definitions based on performance patterns
Usage Notes
- Performance feedback is optional but highly valuable for optimization
- Feedback can be provided at campaign or package level
- Multiple performance indices can be shared for the same period (batch submission planned for future releases)
- Optimization impact depends on the publisher’s algorithm sophistication
- Feedback is processed asynchronously; status can be checked via the response
- Historical feedback helps improve future campaign performance across the publisher’s inventory
Privacy and Data Sharing
- Performance feedback sharing is voluntary and controlled by the buyer
- Aggregate performance patterns may be used to improve overall platform performance
- Individual campaign details remain confidential to the buyer-publisher relationship
- Publishers should provide clear data usage policies in their AdCP documentation
Implementation Guide
Calculating Performance Index
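The index is the ratio of the measured outcome to the expected outcome for the chosen metric, consistent with the scale above: a 2.4% conversion rate against a 3.0% expectation yields 0.8, while a 6.3% click-through rate against a 3.0% expectation yields 2.1. A sketch of the inputs and result (the expected_value and measured_value names are illustrative, not part of the AdCP schema):

```json
{
  "metric_type": "conversion_rate",
  "expected_value": 0.030,
  "measured_value": 0.024,
  "performance_index": 0.8
}
```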
Determining Metric Types
Choose metric types based on campaign objectives. For example, an awareness campaign would typically report brand_lift, a direct-response campaign conversion_rate, and a video campaign completion_rate.
Related Documentation
- get_media_buy_delivery - Retrieve delivery metrics
- Optimization & Reporting - Performance feedback concepts
- Targeting - Understanding targeting for optimization