Prerequisites
- An active Tuner account
- An agent configured with the “Custom API” provider in Tuner
- Python 3.10–3.13 (3.14 is not yet supported due to pipecat dependencies)
- A Python project running pipecat-ai v0.0.105 or later
Overview
The tuner-pipecat-sdk is a lightweight observer that captures flow transitions, latency signals, transcript segments, and usage metadata from your Pipecat pipeline, then sends a structured CallPayload to Tuner when the call ends — no manual API calls required.
The SDK ships two observer types depending on how your pipeline is structured:
| Observer | When to use |
| --- | --- |
| `Observer` | Plain pipecat-ai pipelines using `LLMContext` directly |
| `FlowsObserver` | Pipelines managed by pipecat-flows and a `FlowManager` |
Install the SDK
Add the package to your project with pip.
Set Your Credentials
Configure your Tuner API key, workspace ID, and agent ID.
Add the Observer
Create the correct observer for your pipeline type, attach it, and place it after TTS.
Estimated time: 2 minutes from start to finish
Step 1: Install the SDK
For plain pipecat-ai pipelines:
```shell
pip install tuner-pipecat-sdk
```
For pipecat-flows pipelines, install with the flows extra:
```shell
pip install "tuner-pipecat-sdk[flows]"
```

(The quotes keep shells like zsh from glob-expanding the brackets.)
Requirements: Python 3.10–3.13, pipecat-ai>=0.0.105. Do not use Python 3.14 — pipecat dependencies (onnxruntime, numba) do not yet have 3.14 wheels.
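Before installing, you can sanity-check that your interpreter falls inside the supported range; a minimal sketch:

```python
import sys

# The SDK supports Python 3.10-3.13; 3.14 is not yet supported.
major, minor = sys.version_info[:2]
supported = (3, 10) <= (major, minor) <= (3, 13)
print(f"Python {major}.{minor} supported by tuner-pipecat-sdk: {supported}")
```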
Step 2: Set Your Credentials
You can configure credentials via environment variables or pass them directly in code.
Environment Variables
Inline in Code
```shell
export TUNER_API_KEY="tr_api_..."
export TUNER_WORKSPACE_ID="123"
export TUNER_AGENT_ID="my-agent"
```
| Variable | Required | Description |
| --- | --- | --- |
| `TUNER_API_KEY` | ✅ | Bearer token (starts with `tr_api_`) |
| `TUNER_WORKSPACE_ID` | ✅ | Your Tuner workspace ID |
| `TUNER_AGENT_ID` | ✅ | Agent Remote identifier from Agent Settings |
| `TUNER_BASE_URL` | — | API base URL (default: `https://api.usetuner.ai`) |
```python
import uuid

from tuner_pipecat_sdk import Observer

observer = Observer(
    api_key="tr_api_...",
    workspace_id=123,
    agent_id="my-agent",
    call_id=str(uuid.uuid4()),
    base_url="https://api.usetuner.ai",
)
```
The `agent_id` must match the identifier configured in Tuner. Find it under Agent Settings > Agent Connection > Agent Remote ID.
Step 3: Add the Observer
Choose the observer that matches your pipeline type.
Plain pipecat-ai
pipecat-flows
Use `Observer` when your pipeline manages context directly via `LLMContext`.

```python
import uuid

from tuner_pipecat_sdk import Observer
from pipecat.observers.turn_tracking_observer import TurnTrackingObserver

turn_tracker = TurnTrackingObserver()

observer = Observer(
    api_key="YOUR_TUNER_API_KEY",
    workspace_id=42,
    agent_id="my-agent",
    call_id=str(uuid.uuid4()),
    base_url="https://api.usetuner.ai",
    asr_model="deepgram/nova-3",
    llm_model="gpt-4o-mini",
    tts_model="cartesia/sonic",
)

# Required: attach the LLM context before running the pipeline
observer.attach_context(context)
observer.attach_turn_tracking_observer(turn_tracker)
```
Use `FlowsObserver` when your pipeline is managed by pipecat-flows and a `FlowManager`.

```python
import uuid

from tuner_pipecat_sdk import FlowsObserver
from pipecat.observers.turn_tracking_observer import TurnTrackingObserver

turn_tracker = TurnTrackingObserver()

observer = FlowsObserver(
    api_key="YOUR_TUNER_API_KEY",
    workspace_id=42,
    agent_id="my-agent",
    call_id=str(uuid.uuid4()),
    base_url="https://api.usetuner.ai",
    asr_model="deepgram/nova-3",
    llm_model="gpt-4o-mini",
    tts_model="cartesia/sonic",
)

# Required: attach before running the pipeline
observer.attach_flow_manager(flow_manager)
observer.attach_turn_tracking_observer(turn_tracker)
```
Place the observer after TTS in your pipeline (same for both observer types):
```python
pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),
    llm,
    tts,
    observer,
    transport.output(),
    context_aggregator.assistant(),
])
```
Enable metrics on the pipeline task so latency and usage fields are populated:
```python
from pipecat.pipeline.task import PipelineTask
from pipecat.pipeline.pipeline_params import PipelineParams

task = PipelineTask(
    pipeline,
    params=PipelineParams(
        observers=[observer.latency_observer, turn_tracker],
        enable_metrics=True,
        enable_usage_metrics=True,
    ),
)
```
Without `enable_metrics` and `enable_usage_metrics`, the observer will log warnings and LLM/TTS metric fields will be absent from the payload.
For more examples, see the tuner-pipecat-sdk-python examples.
Configuration Options
Override the default call type label:

```python
Observer(..., call_type="web_call")
Observer(..., call_type="phone_call")
```
Provide a recording URL if available. The default is `"pipecat://no-recording"`:

```python
Observer(..., recording_url="https://cdn.example.com/recordings/call-123.ogg")
```
Record why a call ended by passing a `disconnection_reason_resolver` callable. The resolver is called at flush time and should return a string or `None`. Use the built-in constants from `DisconnectReason`:

| Constant | Value |
| --- | --- |
| `DisconnectReason.USER_HANGUP` | `"user_hangup"` |
| `DisconnectReason.AGENT_HANGUP` | `"agent_hangup"` |
| `DisconnectReason.ERROR` | `"error"` |
| `DisconnectReason.TIMEOUT` | `"timeout"` |
| `DisconnectReason.UNKNOWN` | `"unknown"` |
```python
from tuner_pipecat_sdk.models import DisconnectReason

_reason = None

def resolve_reason() -> str | None:
    return _reason

observer = Observer(..., disconnection_reason_resolver=resolve_reason)

# Set the reason when your app knows it
_reason = DisconnectReason.USER_HANGUP
```
Works the same way on both Observer and FlowsObserver.
Specify your ASR, LLM, and TTS models for metadata:

```python
Observer(
    ...,
    asr_model="deepgram/nova-3",
    llm_model="gpt-4o-mini",
    tts_model="cartesia/sonic",
)
```
Log the full transcript when flushing:

```python
Observer(..., debug=True)
```
Full Examples
Plain pipecat-ai
pipecat-flows
```python
import os
import uuid

from tuner_pipecat_sdk import Observer
from tuner_pipecat_sdk.models import DisconnectReason
from pipecat.observers.turn_tracking_observer import TurnTrackingObserver
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask
from pipecat.pipeline.pipeline_params import PipelineParams

# transport, stt, llm, tts, context, and context_aggregator
# come from your existing pipeline setup.

turn_tracker = TurnTrackingObserver()
_reason = None

observer = Observer(
    api_key=os.environ["TUNER_API_KEY"],
    workspace_id=int(os.environ["TUNER_WORKSPACE_ID"]),
    agent_id="my-agent",
    call_id=str(uuid.uuid4()),
    base_url="https://api.usetuner.ai",
    call_type="web_call",
    asr_model="deepgram/nova-3",
    llm_model="gpt-4o-mini",
    tts_model="cartesia/sonic",
    disconnection_reason_resolver=lambda: _reason,
)

observer.attach_context(context)
observer.attach_turn_tracking_observer(turn_tracker)

pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),
    llm,
    tts,
    observer,
    transport.output(),
    context_aggregator.assistant(),
])

task = PipelineTask(
    pipeline,
    params=PipelineParams(
        observers=[observer.latency_observer, turn_tracker],
        enable_metrics=True,
        enable_usage_metrics=True,
    ),
)

# Set the reason when your app knows it
_reason = DisconnectReason.USER_HANGUP
```
```python
import os
import uuid

from tuner_pipecat_sdk import FlowsObserver
from tuner_pipecat_sdk.models import DisconnectReason
from pipecat.observers.turn_tracking_observer import TurnTrackingObserver
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask
from pipecat.pipeline.pipeline_params import PipelineParams

# transport, stt, llm, tts, flow_manager, and context_aggregator
# come from your existing pipeline setup.

turn_tracker = TurnTrackingObserver()
_reason = None

observer = FlowsObserver(
    api_key=os.environ["TUNER_API_KEY"],
    workspace_id=int(os.environ["TUNER_WORKSPACE_ID"]),
    agent_id="my-agent",
    call_id=str(uuid.uuid4()),
    base_url="https://api.usetuner.ai",
    call_type="web_call",
    asr_model="deepgram/nova-3",
    llm_model="gpt-4o-mini",
    tts_model="cartesia/sonic",
    disconnection_reason_resolver=lambda: _reason,
)

observer.attach_flow_manager(flow_manager)
observer.attach_turn_tracking_observer(turn_tracker)

pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),
    llm,
    tts,
    observer,
    transport.output(),
    context_aggregator.assistant(),
])

task = PipelineTask(
    pipeline,
    params=PipelineParams(
        observers=[observer.latency_observer, turn_tracker],
        enable_metrics=True,
        enable_usage_metrics=True,
    ),
)

# Set the reason when your app knows it
_reason = DisconnectReason.USER_HANGUP
```
Troubleshooting
No matching distribution found for onnxruntime
Python 3.14: Pipecat pins onnxruntime versions that have no 3.14 wheels. Switch to Python 3.12 or 3.13 and create a new venv.
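For example, assuming a `python3.12` interpreter is already installed and on your PATH, a fresh environment can be created like this:

```shell
# Create and activate a virtual environment on a supported interpreter
# (python3.13 works too).
python3.12 -m venv .venv
. .venv/bin/activate

# Reinstall the SDK inside the new environment
pip install "tuner-pipecat-sdk[flows]"
```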
Failed to build numba / Cannot install on Python 3.14
Same as above: use Python 3.12 or 3.13.
Calls don't appear in Tuner
- Verify that `agent_id` matches the Agent Remote ID configured in Tuner under Agent Settings > Agent Connection.
- Confirm `workspace_id` is correct.
- Ensure `enable_metrics=True` and `enable_usage_metrics=True` are set on `PipelineParams`.
- Check your application logs for any error messages from the observer.
- Ensure `TUNER_API_KEY` starts with `tr_api_` and is valid.
- Confirm the API key belongs to the correct workspace.
What’s Next?
Configuring Your Agent Set up call outcomes, user intents, and behavior checks.
Custom Integration Learn about the underlying API if you need more control.
Classifying Calls Define how Tuner categorizes your calls.
Real-Time Alerts Get notified when issues are detected.