Having trouble integrating APIs into your own platform? Well, now we have LLMs to the rescue. Working through developer documentation can be tiresome, but now you can generate OpenAPI specifications for any platform. For those who don't know, an OpenAPI specification is essentially a roadmap to an API's developer documentation. These spec files (sometimes called swagger files) make a developer's job a lot easier. The downside is that these swagger files are not always available as open source. That used to be a roadblock, but not anymore: with the help of LLMs and some smart software development, we can now generate these specification files for any SaaS platform.
To leverage the true power of LLMs, we need efficient prompt engineering. Working with these large language models, I have observed that structuring the prompt as a template generally works wonderfully.
Below is a Jinja template I used for prompting:
Goal: You are an OpenAPI specification generator. You will be given the contents of a developer documentation page.
We are writing the swagger file only for a particular resource {{ resource_name }}. Extract only the relevant information needed from the document content here
```json
{{ document }}
```
to generate the OpenAPI specification whose format is here
```ts
{% include 'openapi_spec.ts' %}
```
Return the output in json format.
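As a sketch of how this template gets turned into a prompt, a simplified version can be rendered with Jinja2 like this. The resource name and document content below are placeholders, and the real template's `{% include 'openapi_spec.ts' %}` line would additionally need a Jinja2 `Environment` with a loader pointing at the directory containing openapi_spec.ts:

```python
from jinja2 import Template

# Simplified stand-in for the prompt template above. The real template also
# includes openapi_spec.ts, which requires an Environment with a
# FileSystemLoader rather than a bare Template.
PROMPT_TEMPLATE = (
    "Goal: You are an OpenAPI specification generator.\n"
    "We are writing the swagger file only for a particular resource "
    "{{ resource_name }}. Extract only the relevant information from:\n"
    "```json\n{{ document }}\n```\n"
    "Return the output in json format."
)


def render_prompt(context: dict) -> str:
    # Fill the template slots with the resource name and document content.
    return Template(PROMPT_TEMPLATE).render(**context)


prompt = render_prompt(
    {"resource_name": "tokens", "document": '{"endpoint": "/v1/oauth/revoke"}'}
)
```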
Swagger files are all about correct formatting, and using a TypeScript type definition to pin down the desired output proved to be really helpful. This is the openapi_spec.ts file that the template includes:
```ts
type KeyValue = {
  key: string;
  value: string;
};

type JsonPathString = string;

interface OpenAPIInfo {
  description: string;
  title: string;
  version: string;
}

interface OpenAPIServer {
  url: string;
}

interface OpenAPIParameter {
  description: string;
  in: string;
  name: string;
  required: boolean;
  schema: {
    type: string;
  };
}

interface OpenAPIResponse {
  description: string;
}

interface OpenAPIOperation {
  tags: string[];
  summary: string;
  description: string;
  operationId: string;
  parameters: OpenAPIParameter[];
  responses: {
    [key: string]: OpenAPIResponse;
  };
}

interface OpenAPIPath {
  [key: string]: {
    [key: string]: OpenAPIOperation;
  };
}

interface OpenAPISpecification {
  openapi: string;
  info: OpenAPIInfo;
  servers: OpenAPIServer[];
  paths: OpenAPIPath;
}
```
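Because the model's JSON output can drift from the requested shape, it is worth running a lightweight sanity check against the top-level keys of the OpenAPISpecification interface before accepting the output. A minimal sketch (not a full schema validation):

```python
import json

# Top-level keys required by the OpenAPISpecification interface above.
REQUIRED_TOP_LEVEL = {"openapi", "info", "servers", "paths"}


def looks_like_openapi_spec(raw: str) -> bool:
    """Check that the LLM's raw JSON output parses and has the top-level
    shape the TypeScript interface requires. A cheap sanity check only."""
    try:
        spec = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(spec, dict) and REQUIRED_TOP_LEVEL <= spec.keys()
```

If the check fails, the prompt can simply be retried before handing the spec downstream.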
Generally, GPT-4 works surprisingly well with both code generation and documents of different formats. Here is an example of code for calling OpenAI's GPT-4. You would have to add some pre-processing of the documentation and render the Jinja template into the prompt.
```python
from typing import Optional, Tuple


def get_completion_with_template(
    template_name: str,
    context: dict,
    dry_run: bool = False,
    progress=None,
    json_mode=False,
    model=None,
) -> Tuple[str, Optional[str]]:
    # Render the Jinja template into the final prompt string.
    prompt = render_prompt(template_name, context)
    if dry_run:
        # Return the rendered prompt without calling the model.
        return prompt, None
    # Helper that sends the prompt to the model (e.g. GPT-4) and
    # returns the completion text.
    completion = get_simple_completion(
        prompt,
        progress=progress,
        json_mode=json_mode,
        model=model,
        template_name=template_name,
    )
    return prompt, completion
```
Well, if all the dots are connected properly, you will see output like this from the LLM:
```json
{
  "openapi": "3.0.1",
  "info": {
    "description": "Miro API",
    "title": "Miro API",
    "version": "0.1"
  },
  "servers": [
    {
      "url": "https://api.miro.com/"
    }
  ],
  "paths": {
    "/v1/oauth/revoke": {
      "post": {
        "tags": ["tokens"],
        "summary": "Revoke token",
        "description": "Revoke the current access token. Revoking an access token means that the access token will no longer work. When an access token is revoked, the refresh token is also revoked and no longer valid. This does not uninstall the application for the user.",
        "operationId": "revoke-token",
        "parameters": [
          {
            "description": "Access token that you want to revoke",
            "in": "query",
            "name": "access_token",
            "required": true,
            "schema": {
              "type": "string"
            }
          }
        ],
        "responses": {
          "204": {
            "description": "Token revoked"
          },
          "400": {
            "description": "Failed to revoke token"
          }
        }
      }
    }
  }
}
```
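Once a spec like this exists, consuming it programmatically is straightforward. As a small sketch, here is how one might enumerate every (method, path, operationId) the LLM produced; the spec string below is abbreviated from the Miro example above:

```python
import json

# Abbreviated version of the generated Miro spec above.
SPEC = """
{
  "openapi": "3.0.1",
  "info": {"description": "Miro API", "title": "Miro API", "version": "0.1"},
  "servers": [{"url": "https://api.miro.com/"}],
  "paths": {
    "/v1/oauth/revoke": {
      "post": {"operationId": "revoke-token", "summary": "Revoke token"}
    }
  }
}
"""


def list_operations(spec_json: str):
    """Return (METHOD, path, operationId) tuples for every operation
    in an OpenAPI spec's paths object."""
    spec = json.loads(spec_json)
    return [
        (method.upper(), path, op.get("operationId"))
        for path, methods in spec["paths"].items()
        for method, op in methods.items()
    ]


print(list_operations(SPEC))
# [('POST', '/v1/oauth/revoke', 'revoke-token')]
```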