Productionizing Kiln

How to take ideas from the lab (Kiln) into your product

Kiln is a great place to rapidly experiment with many models, providers, fine-tunes, and prompts.

Once you've found the ideal way to run your AI workload, you are ready to build (or iterate on) your product. This guide walks through the options for creating products from Kiln projects.

Option 1: Replicate Kiln Calls in Your Codebase

Every model request sent from Kiln is logged to ~/.kiln_ai/logs/model_calls.log. This includes the messages and any parameters sent to the API (model, temperature, reasoning, etc.). It even includes a curl command to replicate the call Kiln made from the command line.

You can use the data in this log to replicate any Kiln request exactly, in the language or framework of your choice. Pick whichever tech stack makes sense for your product: anything from simple HTTP requests, to provider libraries (pip install openai), to integrating llama.cpp into your app or process.
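For example, assuming the log shows a chat-completion request, you could replicate it in Python with the openai client library. Everything below (endpoint, model name, parameters, and messages) is illustrative; substitute the actual values from your own log entry.

```python
# pip install openai
from openai import OpenAI

# Point the client at the same provider Kiln used. The base URL and
# model name here are illustrative; copy the real values from the
# logged request in ~/.kiln_ai/logs/model_calls.log.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # example provider endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="openai/gpt-4o",   # model from the logged request
    temperature=0.7,         # sampling parameters from the log
    messages=[
        # The system prompt and user message exactly as Kiln sent them
        {"role": "system", "content": "<system prompt from the log>"},
        {"role": "user", "content": "<user input from the log>"},
    ],
)

print(response.choices[0].message.content)
```

The same request could just as easily be made as a plain HTTP POST; the curl command in the log shows the exact URL, headers, and JSON body to reproduce.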

Option 2: [Alpha] Use the Kiln Python Library

The MIT-licensed open-source Kiln Python library powers all of the requests made by the Kiln app. You can use it in your own app, pointing it at the Kiln project of your choice. See the library docs for details.
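As a minimal sketch, loading a project created in the Kiln app could look like the following. The import path, load_from_file classmethod, and task accessor below are assumptions about the library's data model, not confirmed API; verify them against the library docs.

```python
# pip install kiln-ai
from pathlib import Path

# Assumed import path; check the kiln_ai library docs.
from kiln_ai.datamodel import Project

# Illustrative location of a project file created in the Kiln app
project_path = Path.home() / "Kiln Projects" / "my_project" / "project.kiln"

# Assumption: datamodel objects load from their on-disk files
project = Project.load_from_file(project_path)

# Inspect the tasks and prompts you defined in the app
for task in project.tasks():  # assumed child accessor
    print(f"{task.name}: {task.instruction}")
```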
