feat: add venice.ai api model provider #1008
Conversation
packages/core/src/models.ts
Outdated
[ModelClass.SMALL]: "llama-3.3-70b",
[ModelClass.MEDIUM]: "llama-3.3-70b",
[ModelClass.LARGE]: "llama-3.1-405b",
[ModelClass.IMAGE]: "fluently-xl",
Maybe allow users to configure them through .env? E.g., https://github.com/ai16z/eliza/pull/999/files
good call and done!
will submit new pr for this
will add back if/when this is added to the api
I didn't test the image generation and it looks like it will require more work, so I removed it for now and will make a new PR to add it later. It also appears Venice doesn't use frequency or presence penalty parameters (yet, at least), so I removed those for now. This PR should be good now.
@odilitime should I push the pnpm-lock.yaml back in to fix the merge conflict?
to fix conflict
@@ -479,6 +479,29 @@ export async function generateText({
            break;
        }

        case ModelProviderName.VENICE: {
You can just add this as a case around line 149, as Venice follows the OpenAI API spec.
Yeah, I tested this as well (it definitely works), but I plan to add the venice_parameters that are exclusive to Venice as they come out.
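A rough sketch of what a dedicated Venice case could look like while it stays separate from the shared OpenAI branch, assuming Venice's OpenAI-compatible chat completions endpoint and plain fetch; the base URL, the helper name, and the commented venice_parameters fields are illustrative assumptions, not the code merged in this PR:

// Illustrative sketch only — assumes an OpenAI-compatible /chat/completions
// endpoint; the base URL and the venice_parameters fields are assumptions.
async function veniceChatCompletion(
    apiKey: string,
    model: string,
    prompt: string,
    temperature = 0.7
): Promise<string> {
    const response = await fetch("https://api.venice.ai/api/v1/chat/completions", {
        method: "POST",
        headers: {
            Authorization: `Bearer ${apiKey}`,
            "Content-Type": "application/json",
        },
        body: JSON.stringify({
            model,
            messages: [{ role: "user", content: prompt }],
            temperature,
            // Frequency/presence penalty are omitted since Venice doesn't use them yet.
            // Venice-only options could be layered in here as they ship, e.g. a
            // hypothetical venice_parameters object:
            // venice_parameters: { include_venice_system_prompt: false },
        }),
    });
    if (!response.ok) {
        throw new Error(`Venice API request failed: ${response.status}`);
    }
    const data = await response.json();
    // OpenAI-spec response shape: the first choice carries the generated message.
    return data.choices[0].message.content as string;
}

Keeping a dedicated VENICE case leaves room for those provider-specific options without touching the shared OpenAI branch.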
model: {
    [ModelClass.SMALL]: settings.SMALL_VENICE_MODEL || "llama-3.3-70b",
    [ModelClass.MEDIUM]: settings.MEDIUM_VENICE_MODEL || "llama-3.3-70b",
    [ModelClass.LARGE]: settings.LARGE_VENICE_MODEL || "llama-3.1-405b",
Could probably add an image model here, like flux-dev-uncensored as the default, with the option to control it in the .env file.
I originally had image generation in here but it needs more work so I figured I'd add it in a later PR. In the first commit here it's included though.
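For reference, the overrides in the hunk above map to .env entries along these lines (example values; only VENICE_API_KEY is required, and an image-model override could join them the same way if image generation lands in a later PR):

# example .env entries — model values shown are the code defaults
VENICE_API_KEY=your-venice-api-key
SMALL_VENICE_MODEL=llama-3.3-70b
MEDIUM_VENICE_MODEL=llama-3.3-70b
LARGE_VENICE_MODEL=llama-3.1-405b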
Merging this so we can stay up to date, expecting there's another PR coming for more Venice stuff. Great work on this.
integrates venice api
Risks
Low, just an additional API option (that's really fast)
Background
Venice.ai is an artificial intelligence platform that provides access to advanced AI models, prioritizing user privacy by storing data only in the user's browser. It offers a paid version, Venice Pro, with higher limits and customization options, which is also available for free to users holding 1 Morpheus (MOR) token. The platform supports document uploads and image generation, and has a presence on Twitter, Farcaster, and Instagram.
What does this PR do?
Adds the Venice AI API for accessing its supported models (really fast)
What kind of change is this?
Feature (non-breaking change which adds functionality)
Documentation changes needed?
My changes do not require a change to the project documentation. Usage is pretty straightforward.
Testing
Where should a reviewer start?
I tested with the default characters (by changing the API option to venice).
Detailed testing steps
Configure VENICE_API_KEY in .env
Change a character json to use modelProvider "venice" (a minimal sketch follows these steps)
Test text generation using configured model
pnpm start --characters=/path/to/character
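A minimal sketch of the character change above — the name is a placeholder, and every other field stays as copied from whichever default character file you start from (omitted here):

{
    "name": "MyVeniceCharacter",
    "modelProvider": "venice"
}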
Discord username
same as github: @proteanx