openai truncation error #134

Open
jb17q opened this issue Nov 4, 2024 · 4 comments

Comments

@jb17q (Contributor) commented Nov 4, 2024

Hi! I'm running the example multimodal agent locally. When I connect to it from the LiveKit Playground or from my local frontend, it throws the following truncation error.

pnpm dev

> [email protected] dev
> tsc && node dist/agent.js dev

[16:36:11.501] INFO (34126): starting worker
    version: "0.1.0"
[16:36:11.525] INFO (34126): Server is listening on port 62867
[16:36:11.657] INFO (34126): registered worker
    version: "0.1.0"
    id: "AW_nWxXzroooSma"
    server_info: {
      "edition": "Cloud",
      "version": "1.8.0",
      "protocol": 15,
      "region": "US West",
      "nodeId": "NC_OPHOENIX1A_byfAyTUVQAMp",
      "debugInfo": "",
      "agentProtocol": 0
    }
[16:36:22.573] INFO (34126): received job request
    version: "0.1.0"
    job: {
      "id": "AJ_SmkzZrus2hCH",
      "type": "JT_ROOM",
      "room": {
        "sid": "RM_yJDb3wZQSvDs",
        "name": "roomName",
        "emptyTimeout": 300,
        "maxParticipants": 0,
        "creationTime": "1730680490",
        "turnPassword": "",
        "enabledCodecs": [
          {
            "mime": "video/H264",
            "fmtpLine": ""
          },
          {
            "mime": "video/VP8",
            "fmtpLine": ""
          },
          {
            "mime": "video/VP9",
            "fmtpLine": ""
          },
          {
            "mime": "video/AV1",
            "fmtpLine": ""
          },
          {
            "mime": "audio/red",
            "fmtpLine": ""
          },
          {
            "mime": "audio/opus",
            "fmtpLine": ""
          }
        ],
        "metadata": "",
        "numParticipants": 0,
        "activeRecording": false,
        "numPublishers": 0,
        "version": {
          "unixMicro": "1730680490306295",
          "ticks": 0
        },
        "departureTimeout": 20
      },
      "namespace": "",
      "metadata": "",
      "agentName": "",
      "state": {
        "status": "JS_RUNNING",
        "error": "",
        "startedAt": "1730680582622198028",
        "endedAt": "0",
        "updatedAt": "1730680582622198028",
        "participantIdentity": ""
      },
      "dispatchId": ""
    }
    resuming: false
    agentName: ""
waiting for participant
starting assistant example agent for voice_assistant_user_5478
Connecting to OpenAI Realtime API at  wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01
[16:36:57.547] ERROR (34127): OpenAI Realtime error {"type":"invalid_request_error","code":null,"message":"Only model output audio messages can be truncated","param":null,"event_id":null}
    sessionId: "sess_APfMuVBtVJ4LcIOQwO0vm"
@Y24-cloud

Hi, have you managed to fix it yet?

@BrandiATMuhkuh

I'm getting the same error from time to time. It seems to happen when the agent is still speaking while a new message is sent (or something like that).
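
For reference, the error text matches what the Realtime API returns when a conversation.item.truncate client event points at an item that is not assistant audio output; the agent presumably sends that event when it detects an interruption while it is still speaking. A rough sketch of what such an event looks like on the raw WebSocket, with made-up values (this is not LiveKit code, just the underlying API event):

    // Illustrative sketch of the Realtime API client event behind this error.
    // The item id and timing are made up; a truncate is only accepted for an
    // assistant message item that contains audio output.
    import { WebSocket } from 'ws';

    const ws = new WebSocket(
      'wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01',
      {
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
          'OpenAI-Beta': 'realtime=v1',
        },
      },
    );

    ws.on('open', () => {
      ws.send(JSON.stringify({
        type: 'conversation.item.truncate',
        item_id: 'item_abc123',  // hypothetical id of the assistant audio item
        content_index: 0,
        audio_end_ms: 1500,      // how much of the audio was actually played
      }));
    });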

@tarikozket

I get this error when the instructions include a multiline string. Joining them into a single-line string, instead of trying to escape the newline characters, worked for me:

      model: new openai.realtime.RealtimeModel({
        instructions: [
          "...",
          "..."
        ].join(" "),
        voice: "alloy",
        ...
      }),
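
For anyone copying this, here is a slightly fuller sketch of the same workaround. The package import and the instruction text are illustrative assumptions (based on the JS multimodal agent example); only the instructions and voice options come from the snippet above:

    // Sketch only: assumes the @livekit/agents-plugin-openai plugin from the
    // JS multimodal agent example. The instruction strings are placeholders.
    import * as openai from '@livekit/agents-plugin-openai';

    const model = new openai.realtime.RealtimeModel({
      // Keep the instructions on one line; embedded newlines in the
      // instructions string appear to be what triggers the truncation error.
      instructions: [
        'You are a helpful voice assistant.',
        'Keep your answers short and conversational.',
      ].join(' '),
      voice: 'alloy',
    });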

@heymartinadams

Definitely been running into this error as well, and have been unable to use LiveKit in production for this reason so far.
