Async Operations

The SDK provides an AsyncCredoAI client for applications that use async/await. This page covers how to use the async client effectively.

When to Use Async

Use the async client when:

  • Building async web applications (FastAPI, Starlette)
  • Making concurrent API calls
  • Working in an async context (e.g., async task queues)
  • You need non-blocking I/O
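The payoff of non-blocking I/O is that independent waits overlap. Here is a minimal sketch using only the standard library; fake_request is a hypothetical stand-in for a network call, not an SDK method:

```python
import asyncio
import time

async def fake_request(delay: float) -> float:
    # Stand-in for a network call; asyncio.sleep yields control while "waiting".
    await asyncio.sleep(delay)
    return delay

async def main() -> float:
    start = time.perf_counter()
    # Three 0.1s "requests" run concurrently, so the total is ~0.1s, not ~0.3s.
    await asyncio.gather(fake_request(0.1), fake_request(0.1), fake_request(0.1))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"elapsed: {elapsed:.2f}s")
```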

Basic Usage

import asyncio
from credoai import AsyncCredoAI

async def main():
    async with AsyncCredoAI() as client:
        # All operations use await
        response = await client.use_cases.list()
        for uc in response.items:
            print(f" - {uc.name}")

asyncio.run(main())

Context Manager

Always use the async client as a context manager to ensure proper cleanup:

async with AsyncCredoAI() as client:
    # Client is ready
    await client.use_cases.list()
# Client is automatically closed

Manual Lifecycle

If you can't use a context manager:

client = AsyncCredoAI()
try:
    await client.use_cases.list()
finally:
    await client.close()

Async Iteration

Use async for with list_all():

async with AsyncCredoAI() as client:
    async for use_case in client.use_cases.list_all(page_size=50):
        print(f" - {use_case.name}")

Collecting Results

async with AsyncCredoAI() as client:
    # Async list comprehension
    all_use_cases = [uc async for uc in client.use_cases.list_all()]
    print(f"Total: {len(all_use_cases)}")
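Both forms drive an async generator, which awaits between pages the way a real client awaits each HTTP call. A minimal self-contained sketch, with list_all as a hypothetical paginator rather than the SDK method:

```python
import asyncio

async def list_all(pages):
    # Hypothetical paginator: awaits once per page (standing in for the
    # network round trip), then yields that page's items one at a time.
    for page in pages:
        await asyncio.sleep(0)  # stand-in for the HTTP request
        for item in page:
            yield item

async def main():
    pages = [["a", "b"], ["c"]]
    # Async list comprehension, same shape as the SDK example above.
    return [item async for item in list_all(pages)]

items = asyncio.run(main())
print(items)
```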

Concurrent Operations

Make multiple API calls concurrently with asyncio.gather():

import asyncio
from credoai import AsyncCredoAI

async def main():
    async with AsyncCredoAI() as client:
        # Fetch use cases, models, and vendors concurrently
        use_cases, models, vendors = await asyncio.gather(
            client.use_cases.list(page_limit=10),
            client.models.list(page_limit=10),
            client.vendors.list(page_limit=10)
        )

        print(f"Use cases: {len(use_cases.items)}")
        print(f"Models: {len(models.items)}")
        print(f"Vendors: {len(vendors.items)}")

asyncio.run(main())

Concurrent Processing

Process multiple items concurrently:

import asyncio
from credoai import AsyncCredoAI

async def process_use_case(client, use_case_id):
    """Process a single use case."""
    use_case = await client.use_cases.get(use_case_id=use_case_id)
    models = await client.use_case_models.list(use_case_id=use_case_id)
    return {
        "name": use_case.name,
        "model_count": len(models.items)
    }

async def main():
    use_case_ids = ["uc_1", "uc_2", "uc_3"]

    async with AsyncCredoAI() as client:
        # Process all use cases concurrently
        results = await asyncio.gather(
            *[process_use_case(client, uc_id) for uc_id in use_case_ids]
        )

        for result in results:
            print(f"{result['name']}: {result['model_count']} models")

asyncio.run(main())
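One caveat when fanning out like this: by default, the first exception raised by any task propagates out of asyncio.gather(). If you want the other results anyway, pass return_exceptions=True, which returns exceptions as values in the results list. A minimal sketch with a hypothetical process coroutine in place of the SDK calls:

```python
import asyncio

async def process(item_id: str) -> str:
    # Hypothetical worker: one input is deliberately made to fail.
    if item_id == "bad":
        raise ValueError(f"cannot process {item_id}")
    await asyncio.sleep(0)  # stand-in for the API calls
    return f"ok:{item_id}"

async def main():
    # return_exceptions=True keeps one failure from discarding the rest;
    # exceptions come back as items in the results list.
    return await asyncio.gather(
        process("a"), process("bad"), process("b"),
        return_exceptions=True,
    )

results = asyncio.run(main())
for r in results:
    if isinstance(r, Exception):
        print(f"failed: {r}")
    else:
        print(r)
```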

Semaphore for Rate Limiting

Limit the number of concurrent requests to avoid hitting API rate limits:

import asyncio
from credoai import AsyncCredoAI

async def fetch_with_semaphore(client, semaphore, use_case_id):
    async with semaphore:
        return await client.use_cases.get(use_case_id=use_case_id)

async def main():
    use_case_ids = ["uc_1", "uc_2", "uc_3", "uc_4", "uc_5"]

    # Limit to 3 concurrent requests
    semaphore = asyncio.Semaphore(3)

    async with AsyncCredoAI() as client:
        tasks = [
            fetch_with_semaphore(client, semaphore, uc_id)
            for uc_id in use_case_ids
        ]
        results = await asyncio.gather(*tasks)

asyncio.run(main())
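To see that the semaphore really caps in-flight work, here is a self-contained sketch (no SDK calls; fetch and the counters are illustrative) that tracks peak concurrency across ten tasks gated by a Semaphore of 3:

```python
import asyncio

in_flight = 0  # tasks currently inside the semaphore
peak = 0       # highest observed in-flight count

async def fetch(semaphore: asyncio.Semaphore, i: int) -> int:
    global in_flight, peak
    async with semaphore:
        in_flight += 1
        peak = max(peak, in_flight)
        await asyncio.sleep(0.01)  # stand-in for the API call
        in_flight -= 1
    return i

async def main():
    semaphore = asyncio.Semaphore(3)  # at most 3 requests at once
    return await asyncio.gather(*(fetch(semaphore, i) for i in range(10)))

results = asyncio.run(main())
print(f"peak concurrency: {peak}")
```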

FastAPI Integration

Use the async client in FastAPI endpoints:

from fastapi import FastAPI, HTTPException
from credoai import AsyncCredoAI

app = FastAPI()

@app.get("/use-cases")
async def list_use_cases():
    async with AsyncCredoAI() as client:
        response = await client.use_cases.list(page_limit=50)
        return {"items": [uc.model_dump() for uc in response.items]}

@app.get("/use-cases/{use_case_id}")
async def get_use_case(use_case_id: str):
    async with AsyncCredoAI() as client:
        try:
            use_case = await client.use_cases.get(use_case_id=use_case_id)
            return use_case.model_dump()
        except Exception:
            raise HTTPException(status_code=404, detail="Use case not found")

Shared Client Instance

Creating a client per request adds connection overhead. For better performance, create one client at startup and share it across requests:

from contextlib import asynccontextmanager
from fastapi import FastAPI
from credoai import AsyncCredoAI

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: create client
    app.state.credoai = AsyncCredoAI()
    yield
    # Shutdown: close client
    await app.state.credoai.close()

app = FastAPI(lifespan=lifespan)

@app.get("/use-cases")
async def list_use_cases():
    client = app.state.credoai
    response = await client.use_cases.list(page_limit=50)
    return {"items": [uc.model_dump() for uc in response.items]}

Error Handling

Error handling works the same as with the sync client:

import asyncio
from credoai import AsyncCredoAI
from credoai.errors import ApiError

async def main():
    async with AsyncCredoAI() as client:
        try:
            use_case = await client.use_cases.get(use_case_id="invalid-id")
        except ApiError as e:
            print(f"Error {e.status_code}: {e.message}")

asyncio.run(main())

Comparison: Sync vs Async

Feature          | Sync (CredoAI)                          | Async (AsyncCredoAI)
-----------------|-----------------------------------------|----------------------------------------------------
Import           | from credoai import CredoAI             | from credoai import AsyncCredoAI
Method calls     | client.use_cases.list()                 | await client.use_cases.list()
Iteration        | for uc in client.use_cases.list_all():  | async for uc in client.use_cases.list_all():
Context manager  | with CredoAI() as client: (optional)    | async with AsyncCredoAI() as client: (recommended)
Concurrent calls | Sequential                              | asyncio.gather()

Next Steps