```
 _                      _           _
(_)                    | |          | |
 _ _ __ ___  _ __   ___| | __ _  __| |
| | '__/ _ \| '_ \ / __| |/ _` |/ _` |
| | | | (_) | | | | (__| | (_| | (_| |
|_|_|  \___/|_| |_|\___|_|\__,_|\__,_|
```
A CLI tool that takes an OpenAPI spec and generates the files needed for function calling.
OpenAPI schemas contain the required information, but in a format that needs to be normalized and transformed before it can be used for function calling. Ironclad takes the OpenAPI schema and generates those files for you.

Without Ironclad, you would have to write the function definitions and the function-calling logic by hand; Ironclad automates this process.

It was created while I was building a chatbot that uses the Spotify API and the OpenAI API. I had to write the function definitions and the function-calling logic manually, and realized the process could be automated from the OpenAPI schema.
The following files are generated inside the `generated` directory:
- `types.ts` - Includes type definitions for the OpenAPI schema, generated by `openapi-typescript`.
- `functions.ts` - Includes function definitions used for Function Calling.
- `runFunction.ts` - Includes a function that calls the functions defined in `functions.ts`. It is responsible for executing the function calls.
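For illustration, each entry in `functions.ts` follows the OpenAI function-calling format: a name, a description, and a JSON Schema describing the parameters. A hypothetical entry, loosely modeled on a Spotify-style search operation (the actual names, descriptions, and parameters are derived from your spec), might look like this:

```ts
// Hypothetical sketch of a generated function definition.
// The real names, descriptions, and parameters come from the OpenAPI spec.
export const functions = [
  {
    name: 'search',
    description: 'Search for tracks, albums, artists, or playlists.',
    parameters: {
      type: 'object',
      properties: {
        q: { type: 'string', description: 'The search query.' },
        type: { type: 'string', description: 'The item type, e.g. "track".' },
      },
      required: ['q', 'type'],
    },
  },
];
```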
Example files are located in the `example` directory.
Prerequisites:

- An OpenAI API key
- Call the `ironclad` command in the root directory.

  ```sh
  $ ironclad
  ```
- The `generated` directory will be created with the files mentioned above.
- Use the `functions` and `runFunction` in your code.

  ```ts
  import { functions } from 'generated/functions';
  import { runFunction } from 'generated/runFunction';
  ```
Import `functions` in your code.

```ts
import { functions } from 'generated/functions';
```

`functions` is passed as an argument to the OpenAI endpoint.
```ts
// Assumes an OpenAI client instance (`openai`) and a `messages` array are already in scope.
const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo-16k',
  stream: true,
  messages,
  functions: functions,
});
```
Import `runFunction` in your code.

```ts
import { runFunction } from 'generated/runFunction';
```

`runFunction` is used to call the functions.
```ts
// OpenAIStream and experimental_onFunctionCall come from the Vercel AI SDK ('ai' package).
const stream = OpenAIStream(response, {
  experimental_onFunctionCall: async (
    { name, arguments: args },
    createFunctionCallMessages,
  ) => {
    const result = await runFunction(name, args);
    // .... rest of the code
  },
});
```
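For context, `runFunction` is essentially a dispatcher: it maps the function name chosen by the model to the corresponding API request described by the OpenAPI spec. A minimal hypothetical sketch of that idea (the generated file is spec-specific; the `search` case, the Spotify URL, and the `SPOTIFY_TOKEN` variable below are illustrative assumptions, not the actual generated code):

```ts
// Hypothetical sketch of a runFunction-style dispatcher.
// The generated file maps each function name to the matching API call
// defined by the OpenAPI spec; this example is illustrative only.
export async function runFunction(name: string, args: Record<string, any>) {
  switch (name) {
    case 'search': {
      // Illustrative: perform the underlying API request with the model's arguments.
      const query = new URLSearchParams({ q: String(args.q), type: String(args.type) });
      const response = await fetch(`https://api.spotify.com/v1/search?${query}`, {
        headers: { Authorization: `Bearer ${process.env.SPOTIFY_TOKEN}` }, // assumed token
      });
      return response.json();
    }
    default:
      throw new Error(`Unknown function: ${name}`);
  }
}
```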
Currently, the CLI tool is not published to npm. You can run it locally by following the steps below:
- Clone the repository.

  ```sh
  $ git clone https://github.com/mislavjc/ironclad.git
  ```

- Install the dependencies.

  ```sh
  $ pnpm install
  ```

- Run the CLI tool.

  ```sh
  $ npm run exec
  ```