Implement the ModernBert model #459

Open · wants to merge 13 commits into main
Conversation

@kozistr (Contributor) commented Dec 25, 2024

What does this PR do?

Closes #457

  • Upgrade the `tokenizers` crate from 0.19.1 to 0.21.0 to address a ModernBert tokenizer issue.
  • Implement the ModernBert model.
    • It works on CPU, and it should also work on CUDA (w/o FA) and MPS.
    • ModernBert uses local (sliding-window) attention. However, I'm unfamiliar with `candle_flash_attn` and don't have a GPU to test FA2 with local attention, so the FlashModernBert implementation remains unsupported at this time (a mask sketch follows this list).
  • Implement a classification head for ModernBert (a sketch also follows this list).
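Since flash attention isn't used here, local attention has to be expressed as an explicit mask on the non-FA path. Below is a minimal illustrative sketch of one way to build such a mask with candle — not the PR's exact code; `window` is assumed to be the per-side half-window (e.g. `local_attention / 2` from the model config):

```rust
use candle::{Device, Result, Tensor};

// Additive sliding-window mask: position i may attend to position j only when
// |i - j| <= window; everything else gets -inf before the softmax.
// Global-attention layers would simply skip this mask.
fn sliding_window_mask(seq_len: usize, window: usize, device: &Device) -> Result<Tensor> {
    let mask: Vec<f32> = (0..seq_len)
        .flat_map(|i| {
            (0..seq_len).map(move |j| {
                if i.abs_diff(j) <= window {
                    0.0
                } else {
                    f32::NEG_INFINITY
                }
            })
        })
        .collect();
    Tensor::from_vec(mask, (seq_len, seq_len), device)
}
```

The classification head follows the usual dense → activation → norm → classifier layout. A hedged sketch with `candle_nn` (the `head.dense`/`head.norm`/`classifier` weight prefixes are illustrative, not necessarily the checkpoint's actual tensor names):

```rust
use candle::{Result, Tensor};
use candle_nn::{LayerNorm, Linear, Module, VarBuilder};

struct ClassificationHead {
    dense: Linear,
    norm: LayerNorm,
    classifier: Linear,
}

impl ClassificationHead {
    fn load(vb: VarBuilder, hidden: usize, n_classes: usize) -> Result<Self> {
        Ok(Self {
            dense: candle_nn::linear(hidden, hidden, vb.pp("head.dense"))?,
            norm: candle_nn::layer_norm(hidden, 1e-5, vb.pp("head.norm"))?,
            classifier: candle_nn::linear(hidden, n_classes, vb.pp("classifier"))?,
        })
    }

    fn forward(&self, pooled: &Tensor) -> Result<Tensor> {
        // dense -> GELU -> LayerNorm, then project to class logits.
        let x = self.dense.forward(pooled)?.gelu()?;
        self.classifier.forward(&self.norm.forward(&x)?)
    }
}
```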

Log

$ ./target/release/text-embeddings-router --model-id ./ModernBERT-base/ --port 8888 --pooling cls --dtype float32
2024-12-25T07:34:46.753673Z  INFO text_embeddings_router: router/src/main.rs:175: Args { model_id: "./Mod*******-*ase/", revision: None, tokenization_workers: None, dtype: Some(Float32), pooling: Some(Cls), max_concurrent_requests: 512, max_batch_tokens: 16384, max_batch_requests: None, max_client_batch_size: 32, auto_truncate: false, default_prompt_name: None, default_prompt: None, hf_api_token: None, hostname: "0.0.0.0", port: 8888, uds_path: "/tmp/text-embeddings-inference-server", huggingface_hub_cache: None, payload_limit: 2000000, api_key: None, json_output: false, otlp_endpoint: None, otlp_service_name: "text-embeddings-inference.server", cors_allow_origin: None }
2024-12-25T07:34:46.817444Z  WARN text_embeddings_router: router/src/lib.rs:184: Could not find a Sentence Transformers config
2024-12-25T07:34:46.817472Z  INFO text_embeddings_router: router/src/lib.rs:188: Maximum number of tokens per request: 8192
2024-12-25T07:34:46.817622Z  INFO text_embeddings_core::tokenization: core/src/tokenization.rs:28: Starting 8 tokenization workers
2024-12-25T07:34:46.883933Z  INFO text_embeddings_router: router/src/lib.rs:230: Starting model backend
2024-12-25T07:34:46.884247Z  INFO text_embeddings_backend_candle: backends/candle/src/lib.rs:239: Starting ModernBert model on Cpu
2024-12-25T07:34:47.138974Z  WARN text_embeddings_router: router/src/lib.rs:258: Backend does not support a batch size > 4
2024-12-25T07:34:47.139002Z  WARN text_embeddings_router: router/src/lib.rs:259: forcing `max_batch_requests=4`
2024-12-25T07:34:47.139930Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1812: Starting HTTP server: 0.0.0.0:8888
2024-12-25T07:34:47.139955Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1813: Ready
2024-12-25T07:34:52.701893Z  INFO embed{total_time="115.486302ms" tokenization_time="322.4µs" queue_time="363.6µs" inference_time="114.688702ms"}: text_embeddings_router::http::server: router/src/http/server.rs:714: Success
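For reference, the Success line above corresponds to a standard TEI embed request against the server started above; something along these lines reproduces it (the exact payload isn't shown in the log, so the input text here is illustrative):

$ curl 127.0.0.1:8888/embed -H 'Content-Type: application/json' -d '{"inputs": "What is Deep Learning?"}'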

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@OlivierDehaene OR @Narsil

candle::bail!("`splade` is not supported for ModernBert")
}

if pool == Pool::LastToken {
Review comment from a Contributor:

Should be implemented below, right?

@kozistr (Contributor, Author) replied:

Thanks for pointing this out! I had mistakenly disabled support for LastToken pooling even though it was already implemented. I've removed the line blocking it, so LastToken pooling can be used again.

1fe761f
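For context, last-token pooling amounts to selecting the hidden state at each sequence's final non-padded position. A minimal illustrative sketch — not the PR's exact code; `lengths` is assumed to hold the real (unpadded) token count per sequence:

```rust
use candle::{IndexOp, Result, Tensor};

// hidden: (batch, seq_len, hidden_size); returns (batch, hidden_size).
fn last_token_pool(hidden: &Tensor, lengths: &[usize]) -> Result<Tensor> {
    let rows: Vec<Tensor> = lengths
        .iter()
        .enumerate()
        // Pick the hidden state at the last real token of each sequence.
        .map(|(b, &len)| hidden.i((b, len - 1))?.unsqueeze(0))
        .collect::<Result<Vec<_>>>()?;
    Tensor::cat(&rows, 0)
}
```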

@michaelfeil (Contributor) commented:

FYI, there is now https://huggingface.co/nomic-ai/modernbert-embed-base.
Let me know if you need GPU access @kozistr

@michaelfeil (Contributor) commented:

FYI, running the nomic-ai/modernbert-embed-base model yields an error, as its safetensors weights are stored under `embeddings.*` rather than `model.embeddings.*`.
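(One way to handle such a prefix mismatch with candle's `VarBuilder` is to probe for a known tensor name and adjust the prefix accordingly. An illustrative sketch — the probed tensor name and the fallback logic are assumptions, not necessarily what the PR ended up doing:

```rust
use candle_nn::VarBuilder;

// If weights are stored as "model.embeddings.*", descend into the "model"
// prefix; otherwise use the builder as-is, since the weights already live
// directly under "embeddings.*".
fn select_prefix(vb: VarBuilder) -> VarBuilder {
    if vb.contains_tensor("model.embeddings.tok_embeddings.weight") {
        vb.pp("model")
    } else {
        vb
    }
}
```
)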

@kozistr (Contributor, Author) commented Jan 6, 2025

FYI, there is now https://huggingface.co/nomic-ai/modernbert-embed-base. Let me know if you need GPU access @kozistr

Thanks! I've just added support for nomic-ai/modernbert-embed-base, and it seems to be working well too: 3b20211

I also appreciate the offer of GPU access! I currently have a lot on my plate, so I'll reach out to you later :) Thanks again for your support.

$ ./target/release/text-embeddings-router --model-id ./modernbert-embed-base --port 8888 --pooling mean --dtype float32
2025-01-06T03:09:26.039864Z  INFO text_embeddings_router: router/src/main.rs:175: Args { model_id: "./mod*******-*****-*ase", revision: None, tokenization_workers: None, dtype: Some(Float32), pooling: Some(Mean), max_concurrent_requests: 512, max_batch_tokens: 16384, max_batch_requests: None, max_client_batch_size: 32, auto_truncate: false, default_prompt_name: None, default_prompt: None, hf_api_token: None, hostname: "0.0.0.0", port: 8888, uds_path: "/tmp/text-embeddings-inference-server", huggingface_hub_cache: None, payload_limit: 2000000, api_key: None, json_output: false, otlp_endpoint: None, otlp_service_name: "text-embeddings-inference.server", cors_allow_origin: None }
2025-01-06T03:09:26.126234Z  INFO text_embeddings_router: router/src/lib.rs:188: Maximum number of tokens per request: 8192
2025-01-06T03:09:26.126419Z  INFO text_embeddings_core::tokenization: core/src/tokenization.rs:28: Starting 8 tokenization workers
2025-01-06T03:09:26.196076Z  INFO text_embeddings_router: router/src/lib.rs:230: Starting model backend
2025-01-06T03:09:26.196763Z  INFO text_embeddings_backend_candle: backends/candle/src/lib.rs:239: Starting ModernBert model on Cpu
2025-01-06T03:09:26.459153Z  WARN text_embeddings_router: router/src/lib.rs:258: Backend does not support a batch size > 4
2025-01-06T03:09:26.459182Z  WARN text_embeddings_router: router/src/lib.rs:259: forcing `max_batch_requests=4`
2025-01-06T03:09:26.460282Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1812: Starting HTTP server: 0.0.0.0:8888
2025-01-06T03:09:26.460306Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1813: Ready
2025-01-06T03:09:31.426262Z  INFO embed{total_time="121.542397ms" tokenization_time="356.3µs" queue_time="418.8µs" inference_time="120.695897ms"}: text_embeddings_router::http::server: router/src/http/server.rs:714: Success
