Learn how to run a model on Replicate from within your Python code. It could be an app, a notebook, an evaluation script, or anywhere else you want to use machine learning.
We maintain an open-source Python client for the API. Install it with pip:
pip install replicate
Generate an API token at replicate.com/account/api-tokens, copy the token, then set it as an environment variable in your shell:
export REPLICATE_API_TOKEN=r8_....
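If exporting a shell variable is awkward in your setup (a hosted notebook, for example), you can also set the variable from Python before importing the client. This is a minimal sketch; the r8_... value is a placeholder for your real token, which you should load from a secrets store rather than hard-coding:
import os

# Set the token before importing replicate so the client can pick it up
os.environ["REPLICATE_API_TOKEN"] = "r8_..."  # placeholder; load from a secrets store

import replicate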
You can run any public model on Replicate from your Python code. Here's an example that runs black-forest-labs/flux-schnell to generate an image:
import replicate

output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={"prompt": "an iguana on the beach, pointillism"}
)

# Save the generated image
with open('output.png', 'wb') as f:
    f.write(output[0].read())

print("Image saved as output.png")
Some models take files as inputs. You can use a local file on your machine as input, or you can provide an HTTPS URL to a file on the public internet.
Here's an example that uses a local file as input to the LLaVA vision model, which takes an image and a text prompt as input and responds with text:
import replicate

image = open("my_fridge.jpg", "rb")

output = replicate.run(
    "yorickvp/llava-13b:a0fdc44e4f2e1f20f2bb4e27846899953ac8e66c5886c5878fa1d6b73ce009e5",
    input={
        "image": image,
        "prompt": "Here's what's in my fridge. What can I make for dinner tonight?"
    }
)

print(output)
# You have a well-stocked refrigerator filled with various fruits, vegetables, and ...
URLs are more efficient if your file is large or is already hosted somewhere on the public internet.
Here's an example that uses an HTTPS URL of an image on the internet as input to a model:
image = "https://example.com/my_fridge.jpg"
output = replicate.run(
"yorickvp/llava-13b:a0fdc44e4f2e1f20f2bb4e27846899953ac8e66c5886c5878fa1d6b73ce009e5",
input={
"image": image,
"prompt": "Here's what's in my fridge. What can I make for dinner tonight?"
}
)
print(output)
# You have a well-stocked refrigerator filled with various fruits, vegetables, and ...
Some models stream output as they run. These models return an iterator, which you can loop over as results become available:
iterator = replicate.run(
    "mistralai/mixtral-8x7b-instruct-v0.1",
    input={"prompt": "Who was Dolly the sheep?"},
)

for text in iterator:
    print(text, end="")
# Dolly the sheep was the first mammal to be successfully cloned from an adult cell...
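If you want the complete response as a single string rather than printing chunks as they arrive, you can join the streamed pieces once the iterator is exhausted. A minimal sketch using the same model as above:
# Collect all streamed chunks into one string
full_text = "".join(
    replicate.run(
        "mistralai/mixtral-8x7b-instruct-v0.1",
        input={"prompt": "Who was Dolly the sheep?"},
    )
)
print(full_text)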
Some models generate files as output, such as images or audio. These are returned as FileOutput objects, which you can easily save or process:
output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={"prompt": "A majestic lion"}
)

# Save the generated image
with open('lion.png', 'wb') as f:
    f.write(output[0].read())
print("Image saved as lion.png")

# Handle multiple outputs
output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={"prompt": "A majestic lion", "num_outputs": 2}
)

for idx, file_output in enumerate(output):
    with open(f'output_{idx}.png', 'wb') as f:
        f.write(file_output.read())
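You don't have to write the bytes to disk at all: read() returns raw bytes, so you can hand them straight to an image library and work with the output in memory. A small sketch, assuming Pillow is installed in your environment:
import io

from PIL import Image  # assumes Pillow is installed: pip install pillow

# Load the first generated image directly into memory instead of writing it to disk
image = Image.open(io.BytesIO(output[0].read()))
print(image.size, image.format)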
For more details on handling output files, see Output Files.