
CoreML failed: Unable to get shape for output #23262

Open
thewh1teagle opened this issue Jan 6, 2025 · 0 comments
Labels
ep:CoreML issues related to CoreML execution provider

Comments

@thewh1teagle

Describe the issue

I'm using kokoro-onnx for TTS generation with the CoreML execution provider on macOS (Apple M1), and it fails with the following error:

2025-01-06 20:04:29.684416 [W:onnxruntime:, helper.cc:88 IsInputSupported] CoreML does not support shapes with dimension values of 0. Input:/Slice_1_output_0, shape: {0}
2025-01-06 20:04:29.684759 [W:onnxruntime:, helper.cc:88 IsInputSupported] CoreML does not support shapes with dimension values of 0. Input:/decoder/generator/m_source/l_sin_gen/Slice_output_0, shape: {0}
2025-01-06 20:04:29.685270 [W:onnxruntime:, helper.cc:82 IsInputSupported] CoreML does not support input dim > 16384. Input:decoder.generator.stft.stft.window_sum, shape: {5000015}
2025-01-06 20:04:29.686710 [W:onnxruntime:, coreml_execution_provider.cc:115 GetCapability] CoreMLExecutionProvider::GetCapability, number of partitions supported by CoreML: 123 number of nodes in the graph: 2361 number of nodes supported by CoreML: 949
Traceback (most recent call last):
  File "/Volumes/Internal/audio/kokoro-onnx/examples/with_session.py", line 14, in <module>
    session = InferenceSession("kokoro-v0_19.onnx", providers=["CoreMLExecutionProvider"])
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/Internal/audio/kokoro-onnx/.venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 465, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Volumes/Internal/audio/kokoro-onnx/.venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 537, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : model_builder.cc:768 RegisterModelInputOutput Unable to get shape for output: /Squeeze_output_0

To reproduce

"""
pip install kokoro-onnx==0.2.3 soundfile

wget https://github.com/thewh1teagle/kokoro-onnx/releases/download/model-files/kokoro-v0_19.onnx
wget https://github.com/thewh1teagle/kokoro-onnx/releases/download/model-files/voices.json
python examples/custom_session.py
"""

import soundfile as sf
from kokoro_onnx import Kokoro
from onnxruntime import InferenceSession

# See list of providers https://github.com/microsoft/onnxruntime/issues/22101#issuecomment-2357667377
session = InferenceSession("kokoro-v0_19.onnx", providers=["CoreMLExecutionProvider"])
kokoro = Kokoro.from_session(session, "voices.json")
samples, sample_rate = kokoro.create(
    "Hello. This audio generated by kokoro!", voice="af_sarah", speed=1.0, lang="en-us"
)
sf.write("audio.wav", samples, sample_rate)
print("Created audio.wav")
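As a workaround sketch (not from the issue itself, and purely an assumption until the CoreML shape bug is fixed): ONNX Runtime accepts an ordered providers list and falls back to later entries, so listing `CPUExecutionProvider` after `CoreMLExecutionProvider` keeps the session usable. A small hypothetical helper to build such a list from the providers actually available:

```python
def build_provider_list(preferred, available):
    """Keep preferred execution providers that are actually available,
    preserving their order, and always end with CPUExecutionProvider
    so the session can fall back if an accelerated EP fails."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# Assumed availability on an Apple Silicon Mac; at runtime this would
# come from onnxruntime.get_available_providers().
available = ["CoreMLExecutionProvider", "CPUExecutionProvider"]
providers = build_provider_list(["CoreMLExecutionProvider"], available)
print(providers)
# ['CoreMLExecutionProvider', 'CPUExecutionProvider']
```

The resulting list would then be passed as `InferenceSession("kokoro-v0_19.onnx", providers=providers)`; this does not avoid the warnings above, but unsupported partitions run on CPU instead of aborting session creation.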

Urgency

Inference is too slow on CPU.

Platform

Mac

OS Version

14.5 (23F79)

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

onnxruntime v1.20.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CoreML

Execution Provider Library Version

No response

@github-actions github-actions bot added the ep:CoreML issues related to CoreML execution provider label Jan 6, 2025