Sharing and serving your model

Mar 19, 2026

:::caution This feature is experimental! :::

After training your model, you'll want to deploy it for use or share it with your team. You can do this directly via the training run platform UI or through the OpenAI client.

Download weights

You can download model weights from the training run page when they are available for your run configuration.

  • Open your training run in the Castform app
  • Go to the Artifacts or Model output section
  • Download the exported checkpoint/weights package

Use the downloaded weights for your own hosting workflows, offline evals, or further fine-tuning pipelines.
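The exact layout of the exported package depends on your run, but if it unpacks to a standard Hugging Face-style checkpoint directory (an assumption to verify against your artifact; the directory name below is hypothetical), a minimal local smoke test with transformers might look like this:

# A minimal sketch for loading a downloaded checkpoint locally.
# Assumes the exported package unpacks to a Hugging Face-format directory
# (config, tokenizer, and weight files) -- check your run's artifact contents.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint_dir = "./downloaded-checkpoint"  # hypothetical path to the unpacked weights

tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)
model = AutoModelForCausalLM.from_pretrained(checkpoint_dir)

# Quick smoke test: generate a short completion from a prompt.
inputs = tokenizer("your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))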

Platform UI

  • Head to your training run on Castform
  • Go to the Playground tab

OpenAI client

  • Copy the ID from the rightmost button on the training run page
  • Call the OpenAI client with base_url=https://app.castform.com/inference and the model parameter set to the training run ID you copied, as shown below.
from openai import OpenAI

client = OpenAI(
    base_url="https://app.castform.com/inference",
    api_key="your-api-key"
)

# use your fine-tuned model
response = client.chat.completions.create(
    model="your-training-run-id-here",  # paste the training run id you copied
    messages=[
        {"role": "user", "content": "your prompt here"}
    ]
)
print(response.choices[0].message.content)
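
For longer generations you may want tokens as they arrive. The OpenAI client supports this via stream=True, assuming the Castform inference endpoint implements OpenAI-style streaming responses (an assumption; fall back to the non-streaming call above if it does not). A minimal sketch:

# Streaming variant of the call above -- assumes the endpoint supports
# OpenAI-style streamed chat completions.
stream = client.chat.completions.create(
    model="your-training-run-id-here",  # paste the training run ID you copied
    messages=[
        {"role": "user", "content": "your prompt here"}
    ],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)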

Next steps