
provide_performance_feedback

Share performance outcomes with publishers to enable data-driven optimization and improved campaign delivery.

Response Time: ~5 seconds (data ingestion)
Request Schema: https://adcontextprotocol.org/schemas/v1/media-buy/provide-performance-feedback-request.json
Response Schema: https://adcontextprotocol.org/schemas/v1/media-buy/provide-performance-feedback-response.json
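If you want to validate requests locally before sending them, the published request schema can be used with any standard JSON Schema validator. The following is a minimal sketch using the Python jsonschema package; fetching the schema over HTTP on every call is a simplification for illustration only.

import json
import urllib.request

import jsonschema  # pip install jsonschema

REQUEST_SCHEMA_URL = "https://adcontextprotocol.org/schemas/v1/media-buy/provide-performance-feedback-request.json"

def validate_feedback_request(payload: dict) -> None:
    """Raise jsonschema.ValidationError if the payload does not match the published schema."""
    with urllib.request.urlopen(REQUEST_SCHEMA_URL) as resp:
        schema = json.load(resp)
    jsonschema.validate(instance=payload, schema=schema)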

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| media_buy_id | string | Yes | Publisher’s media buy identifier |
| measurement_period | object | Yes | Time period for performance measurement |
| performance_index | number | Yes | Normalized performance score (0.0 = no value, 1.0 = expected, >1.0 = above expected) |
| package_id | string | No | Specific package within the media buy (if feedback is package-specific) |
| creative_id | string | No | Specific creative asset (if feedback is creative-specific) |
| metric_type | string | No | The business metric being measured (defaults to “overall_performance”) |
| feedback_source | string | No | Source of the performance data (defaults to “buyer_attribution”) |
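As an illustration of how these parameters fit together, the sketch below assembles a request dictionary and applies the documented defaults. The build_feedback_request helper is hypothetical, not part of any AdCP SDK.

# Hypothetical helper (not part of any AdCP SDK): builds a provide_performance_feedback
# request from the parameters documented above. Defaults mirror the table:
# metric_type -> "overall_performance", feedback_source -> "buyer_attribution".
def build_feedback_request(media_buy_id: str, start: str, end: str, performance_index: float,
                           package_id: str | None = None, creative_id: str | None = None,
                           metric_type: str = "overall_performance",
                           feedback_source: str = "buyer_attribution") -> dict:
    if performance_index < 0:
        raise ValueError("performance_index must be >= 0 (0.0 = no value, 1.0 = expected)")
    request = {
        "media_buy_id": media_buy_id,
        "measurement_period": {"start": start, "end": end},  # ISO 8601 timestamps, e.g. "2024-01-15T00:00:00Z"
        "performance_index": performance_index,
        "metric_type": metric_type,
        "feedback_source": feedback_source,
    }
    # Optional scoping: include only when feedback targets a specific package or creative.
    if package_id:
        request["package_id"] = package_id
    if creative_id:
        request["creative_id"] = creative_id
    return request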

Response (Message)

The response includes a human-readable message that:
  • Confirms receipt of the performance feedback
  • Summarizes the performance level provided
  • Explains how the feedback will be used for optimization
  • Provides next steps or recommendations
The message is returned differently in each protocol:
  • MCP: Returned as a message field in the JSON response
  • A2A: Returned as a text part in the artifact

Response (Payload)

{
  "success": "boolean",
  "message": "string"
}

Field Descriptions

  • success: Whether the performance feedback was successfully received
  • message: Optional human-readable message about the feedback processing

Protocol-Specific Examples

The AdCP payload is identical across protocols. Only the request/response wrapper differs.

MCP Request

{
  "tool": "provide_performance_feedback",
  "arguments": {
    "media_buy_id": "gam_1234567890",
    "measurement_period": {
      "start": "2024-01-15T00:00:00Z",
      "end": "2024-01-21T23:59:59Z"
    },
    "performance_index": 1.35,
    "metric_type": "conversion_rate"
  }
}

MCP Response

{
  "message": "Performance feedback received for campaign gam_1234567890. The 35% above-expected conversion rate will be used to optimize future delivery. Next optimization cycle runs tonight at midnight UTC.",
  "success": true
}

A2A Request

Natural Language Invocation

await a2a.send({
  message: {
    parts: [{
      kind: "text",
      text: "The campaign gam_1234567890 had a conversion rate 35% above expectations for the week of January 15-21. Please use this to optimize future delivery."
    }]
  }
});

Explicit Skill Invocation

await a2a.send({
  message: {
    parts: [{
      kind: "data",
      data: {
        skill: "provide_performance_feedback",
        parameters: {
          media_buy_id: "gam_1234567890",
          measurement_period: {
            start: "2024-01-15T00:00:00Z",
            end: "2024-01-21T23:59:59Z"
          },
          performance_index: 1.35,
          metric_type: "conversion_rate"
        }
      }
    }]
  }
});

A2A Response

A2A returns results as artifacts:
{
  "artifacts": [{
    "name": "performance_feedback_confirmation",
    "parts": [
      {
        "kind": "text",
        "text": "Performance feedback received for campaign gam_1234567890. The 35% above-expected conversion rate will be used to optimize future delivery. Next optimization cycle runs tonight at midnight UTC."
      },
      {
        "kind": "data", 
        "data": {
          "success": true
        }
      }
    ]
  }]
}

Key Differences

  • MCP: Direct tool call with arguments, returns flat JSON response
  • A2A: Skill invocation via a data part, returns artifacts with text and data parts
  • Payload: The parameters object in A2A’s data part contains the exact same structure as MCP’s arguments
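As one way to make that concrete, the sketch below normalizes both response shapes into the same (success, message) pair; the response dictionaries are assumed to match the examples above.

# Normalize MCP and A2A responses into one (success, message) pair.
# Assumes responses shaped like the examples above.
def parse_feedback_response(response: dict) -> tuple[bool, str]:
    if "artifacts" in response:
        # A2A: the text part carries the message, the data part carries the payload
        parts = response["artifacts"][0]["parts"]
        message = next((p["text"] for p in parts if p["kind"] == "text"), "")
        data = next((p["data"] for p in parts if p["kind"] == "data"), {})
        return data.get("success", False), message
    # MCP: flat JSON with success and message fields
    return response.get("success", False), response.get("message", "")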

Scenarios

Example 1: Campaign-Level Performance Feedback

Request

{
  "media_buy_id": "gam_1234567890",
  "measurement_period": {
    "start": "2024-01-01T00:00:00Z",
    "end": "2024-01-31T23:59:59Z"
  },
  "performance_index": 0.85,
  "metric_type": "brand_lift",
  "feedback_source": "third_party_measurement"
}

Response - Below Expected Performance

Message: “Performance feedback received for campaign gam_1234567890. The 15% below-expected brand lift suggests targeting refinement is needed. Our optimization algorithms will reduce spend on underperforming segments starting with the next cycle.”

Payload:
{
  "success": true,
  "message": "Performance feedback processed successfully. Optimization algorithms updated."
}

Example 2: Package-Specific Performance Feedback

Request

{
  "media_buy_id": "meta_9876543210",
  "package_id": "pkg_social_feed",
  "measurement_period": {
    "start": "2024-02-01T00:00:00Z",
    "end": "2024-02-07T23:59:59Z"
  },
  "performance_index": 2.1,
  "metric_type": "click_through_rate",
  "feedback_source": "buyer_attribution"
}

Response - Exceptional Performance

Message: “Outstanding performance feedback for package pkg_social_feed! The 110% above-expected click-through rate indicates this audience segment is highly engaged. We’ll increase allocation to similar inventory and audiences.”

Payload:
{
  "success": true,
  "message": "Exceptional performance noted. Increasing allocation to similar segments."
}

Example 3: Creative-Specific Performance Feedback

Request

{
  "media_buy_id": "ttd_5555555555",
  "creative_id": "creative_video_123",
  "measurement_period": {
    "start": "2024-02-01T00:00:00Z",
    "end": "2024-02-07T23:59:59Z"
  },
  "performance_index": 0.65,
  "metric_type": "completion_rate",
  "feedback_source": "verification_partner"
}

Response - Poor Creative Performance

Message: “Creative creative_video_123 shows 35% below-expected completion rate. Consider creative refresh or A/B testing alternative versions.”

Payload:
{
  "success": true,
  "message": "Creative performance feedback recorded. Consider creative optimization."
}

Example 4: Multiple Performance Metrics (Future)

Request - Batch Feedback (Not Implemented Yet)

{
  "media_buy_id": "ttd_5555555555",
  "measurement_period": {
    "start": "2024-02-01T00:00:00Z",
    "end": "2024-02-14T23:59:59Z"
  },
  "feedback_metrics": [
    {
      "metric_type": "viewability",
      "performance_index": 1.15
    },
    {
      "metric_type": "completion_rate",
      "performance_index": 0.92
    },
    {
      "metric_type": "brand_safety",
      "performance_index": 1.05
    }
  ],
  "feedback_source": "verification_partner"
}
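Until batch submission is available, a buyer can send one feedback call per metric for the same measurement period. The sketch below illustrates that workaround; send_feedback is a placeholder for whichever transport is in use (the MCP tool call or A2A skill invocation shown earlier).

# Workaround until batch feedback is supported: one request per metric.
# send_feedback() is a placeholder for the MCP tool call or A2A skill invocation.
def submit_metrics_individually(send_feedback, media_buy_id, measurement_period, metrics,
                                feedback_source="verification_partner"):
    results = []
    for metric in metrics:  # e.g. {"metric_type": "viewability", "performance_index": 1.15}
        results.append(send_feedback({
            "media_buy_id": media_buy_id,
            "measurement_period": measurement_period,
            "performance_index": metric["performance_index"],
            "metric_type": metric["metric_type"],
            "feedback_source": feedback_source,
        }))
    return results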

Performance Index Scale

The performance index provides a normalized way to communicate business outcomes:
  • 0.0: No measurable value or impact
  • 0.5: Significantly below expectations (-50%)
  • 1.0: Meets baseline expectations (0% variance)
  • 1.5: Exceeds expectations by 50%
  • 2.0+: Exceptional performance (100%+ above expected)
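In other words, the index is one plus the fractional variance from baseline. A small sketch of converting between the two representations:

# index = 1 + variance; e.g. +35% variance -> 1.35, -15% variance -> 0.85
def variance_to_index(variance_pct: float) -> float:
    return max(0.0, 1.0 + variance_pct / 100.0)

def index_to_variance(performance_index: float) -> float:
    return (performance_index - 1.0) * 100.0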

Common Metric Types

  • overall_performance: General campaign success (default)
  • conversion_rate: Post-click or post-view conversions
  • brand_lift: Brand awareness or consideration lift
  • click_through_rate: Engagement with creative
  • completion_rate: Video or audio completion rates
  • viewability: Viewable impression rate
  • brand_safety: Brand safety compliance
  • cost_efficiency: Cost per desired outcome

Feedback Sources

  • buyer_attribution: Buyer’s own measurement and attribution
  • third_party_measurement: Independent measurement partner
  • platform_analytics: Publisher platform’s analytics
  • verification_partner: Third-party verification service

How Publishers Use Performance Feedback

Publishers leverage performance indices to:
  1. Optimize Targeting: Shift impressions to high-performing segments and audiences
  2. Improve Inventory: Identify and prioritize high-value placements
  3. Adjust Pricing: Update CPMs based on proven value delivery
  4. Enhance Algorithms: Train machine learning models on actual business outcomes
  5. Product Development: Refine product definitions based on performance patterns
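How a publisher acts on the index is entirely implementation-specific. Purely as an illustration, a publisher might keep a smoothed per-segment score and shift allocation toward segments trending above 1.0; the segment keys and smoothing factor below are assumptions, not part of AdCP.

# Illustration only: one way a publisher might fold feedback into allocation decisions.
class SegmentPerformanceTracker:
    def __init__(self, smoothing: float = 0.3):
        self.smoothing = smoothing
        self.scores: dict[str, float] = {}  # segment -> smoothed performance index

    def record(self, segment: str, performance_index: float) -> None:
        prior = self.scores.get(segment, 1.0)  # start at the baseline expectation of 1.0
        self.scores[segment] = (1 - self.smoothing) * prior + self.smoothing * performance_index

    def allocation_weights(self) -> dict[str, float]:
        total = sum(self.scores.values()) or 1.0
        return {segment: score / total for segment, score in self.scores.items()}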

Usage Notes

  • Performance feedback is optional but highly valuable for optimization
  • Feedback can be provided at campaign or package level
  • Multiple performance indices can be shared for the same period (batch submission planned for future releases)
  • Optimization impact depends on the publisher’s algorithm sophistication
  • Feedback is ingested within the ~5 second response window; the success field confirms receipt, and optimization is applied asynchronously in subsequent cycles
  • Historical feedback helps improve future campaign performance across the publisher’s inventory

Privacy and Data Sharing

  • Performance feedback sharing is voluntary and controlled by the buyer
  • Aggregate performance patterns may be used to improve overall platform performance
  • Individual campaign details remain confidential to the buyer-publisher relationship
  • Publishers should provide clear data usage policies in their AdCP documentation

Implementation Guide

Calculating Performance Index

def calculate_performance_index(actual_metric, expected_metric):
    """
    Calculate normalized performance index
    
    Args:
        actual_metric: Measured performance value
        expected_metric: Baseline or expected performance value
        
    Returns:
        Performance index (0.0 = no value, 1.0 = expected, >1.0 = above expected)
    """
    if expected_metric == 0:
        return 0.0
        
    return actual_metric / expected_metric

# Examples:
# CTR: 0.15% actual vs 0.12% expected = 1.25 performance index (25% above)
# Conversions: 45 actual vs 60 expected = 0.75 performance index (25% below)
# Brand lift: 8% actual vs 5% expected = 1.6 performance index (60% above)

Determining Metric Types

Choose metric types based on campaign objectives:
METRIC_TYPE_MAPPING = {
    'awareness': 'brand_lift',
    'consideration': 'brand_lift', 
    'traffic': 'click_through_rate',
    'conversions': 'conversion_rate',
    'sales': 'conversion_rate',
    'engagement': 'completion_rate',
    'reach': 'overall_performance'
}

def get_metric_type(campaign_objective):
    return METRIC_TYPE_MAPPING.get(campaign_objective, 'overall_performance')
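Putting the two helpers together, a buyer-side reporting step might look like the sketch below; submit_feedback is a placeholder for the MCP tool call or A2A skill invocation shown earlier.

# Example buyer-side workflow combining the helpers above.
# submit_feedback() is a placeholder for the MCP tool call or A2A skill invocation.
def report_performance(submit_feedback, media_buy_id, campaign_objective,
                       actual_metric, expected_metric, measurement_period):
    index = calculate_performance_index(actual_metric, expected_metric)
    return submit_feedback({
        "media_buy_id": media_buy_id,
        "measurement_period": measurement_period,
        "performance_index": round(index, 2),
        "metric_type": get_metric_type(campaign_objective),
        "feedback_source": "buyer_attribution",
    })

# e.g. a traffic campaign with 0.15% actual vs 0.12% expected CTR
# -> performance_index 1.25, metric_type "click_through_rate"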