Generated llama-deploy API client from OpenAPI specification
This package provides a TypeScript llama-deploy API client generated from the OpenAPI specification using @hey-api/openapi-ts.

```bash
npm install @llamaindex/llama-deploy
```
To generate the llama-deploy API client from the OpenAPI specification:
```bash
npm run generate
```
This will read the OpenAPI JSON file from `../chat-ui/src/hook/openapi.json` and generate TypeScript client code in `src/generated/`.
To build the package:
```bash
npm run build
```
This will run the generator and compile TypeScript to the `dist/` directory.
To clean generated files:
```bash
npm run clean
```
Example usage:

```typescript
import { client, DeploymentsService } from '@llamaindex/llama-deploy'

// Configure the client
client.setConfig({
  baseUrl: 'https://your-api-base-url.com',
})

// Use the API services
const deployments = await DeploymentsService.readDeploymentsDeploymentsGet()
```
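If the deployment requires authentication, the same `client.setConfig` call can carry default headers. A minimal sketch follows; the `Authorization` header and the `LLAMA_DEPLOY_API_KEY` environment variable are assumptions for illustration, not part of the package, so adapt them to whatever auth scheme your deployment expects.

```typescript
import { client } from '@llamaindex/llama-deploy'

// Hypothetical token-based auth: the header name and the environment
// variable below are illustrative only.
client.setConfig({
  baseUrl: 'https://your-api-base-url.com',
  headers: {
    Authorization: `Bearer ${process.env.LLAMA_DEPLOY_API_KEY}`,
  },
})
```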
The generated client includes services for:
- `DeploymentsService`: Manage deployments
- `TasksService`: Create and manage tasks
- `SessionsService`: Handle sessions
- `EventsService`: Stream and send events
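As a rough sketch of how the other services compose, the snippet below creates a task and then fetches its events. The method names and argument shapes are hypothetical: generated names follow the operationId-derived pattern seen in `readDeploymentsDeploymentsGet`, so check `src/generated/services.ts` for the real signatures.

```typescript
import { TasksService, EventsService } from '@llamaindex/llama-deploy'

// WARNING: method names and argument shapes here are illustrative only;
// the generated names mirror the OpenAPI operationIds, so consult
// src/generated/services.ts for the actual methods.
const task = await TasksService.createTaskTasksPost({
  requestBody: { input: 'summarize the quarterly report' },
})

const events = await EventsService.getTaskEventsEventsGet({
  taskId: task.id,
})
```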
The following files are generated and should not be edited manually:
- `src/generated/client.ts` - HTTP client configuration
- `src/generated/services.ts` - API service methods
- `src/generated/types.ts` - TypeScript type definitions
- `src/generated/index.ts` - Main exports
The generation is configured in `openapi-ts.config.ts`. Key settings:
- Input: `../chat-ui/src/hook/openapi.json`
- Output: `./src/generated`
- Client: `@hey-api/client-fetch`
- Format: Prettier formatting applied
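For reference, a minimal `openapi-ts.config.ts` matching the settings above might look like the following. Exact option names vary between versions of @hey-api/openapi-ts, so treat this as a sketch rather than the file's actual contents.

```typescript
import { defineConfig } from '@hey-api/openapi-ts'

// Sketch only: option names may differ across @hey-api/openapi-ts versions.
export default defineConfig({
  input: '../chat-ui/src/hook/openapi.json',
  output: {
    path: './src/generated',
    format: 'prettier', // apply Prettier to the generated files
  },
  client: '@hey-api/client-fetch',
})
```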