scorebook.inference.vertex
Google Cloud Vertex AI batch inference implementation for Scorebook.
This module provides utilities for running batch inference using Google Cloud Vertex AI Gemini models, supporting large-scale asynchronous processing. It handles API communication, request formatting, response processing, and Cloud Storage operations.
responses
async def responses(
items: List[Union[
str,
List[str],
types.Content,
List[types.Content],
types.FunctionCall,
List[types.FunctionCall],
types.Part,
List[types.Part],
]],
model: str,
client: Optional[genai.Client] = None,
project_id: Optional[str] = None,
location: str = "us-central1",
system_instruction: Optional[str] = None,
**hyperparameters: Any) -> List[types.GenerateContentResponse]
Process multiple inference requests using Google Cloud Vertex AI.
This asynchronous function handles multiple inference requests, manages the API communication, and processes the responses.
Arguments:
items
- List of preprocessed items to process.
model
- Gemini model ID to use (e.g., 'gemini-2.0-flash-001').
client
- Optional Vertex AI client instance.
project_id
- Google Cloud Project ID. If None, uses GOOGLE_CLOUD_PROJECT env var.
location
- Google Cloud region (default: 'us-central1').
system_instruction
- Optional system instruction to guide model behavior.
hyperparameters
- Additional parameters for the requests.
Returns:
List of raw model responses.
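For orientation, the fan-out pattern that an async function like this typically follows can be sketched with a stub standing in for the real Vertex AI call. The `fake_generate` coroutine, `responses_sketch`, and the `max_concurrency` limit below are illustrative assumptions, not part of Scorebook or the genai SDK:

```python
import asyncio
from typing import Any, List

async def fake_generate(item: str, model: str) -> str:
    # Hypothetical stand-in for a real Vertex AI generate-content call.
    await asyncio.sleep(0)  # yield control, as a network call would
    return f"{model}: {item}"

async def responses_sketch(items: List[str], model: str,
                           max_concurrency: int = 4) -> List[Any]:
    # Bound the number of in-flight requests so large item lists
    # do not overwhelm the API.
    sem = asyncio.Semaphore(max_concurrency)

    async def one(item: str) -> Any:
        async with sem:
            return await fake_generate(item, model)

    # gather preserves input order, so responses line up with items.
    return await asyncio.gather(*(one(i) for i in items))

results = asyncio.run(responses_sketch(["a", "b"], "gemini-2.0-flash-001"))
```

Because `asyncio.gather` preserves argument order, the returned list lines up index-for-index with `items`, which is what lets callers zip inputs and raw responses back together.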
batch
async def batch(items: List[Any],
model: str,
project_id: Optional[str] = None,
location: str = "us-central1",
input_bucket: Optional[str] = None,
output_bucket: Optional[str] = None,
**hyperparameters: Any) -> List[Any]
Process multiple inference requests in batch using Google Cloud Vertex AI.
This asynchronous function handles batch processing of inference requests, optimizing for cost and throughput using Google Cloud's batch prediction API.
Arguments:
items
- List of preprocessed items to process.
model
- Gemini model ID to use (e.g., 'gemini-2.0-flash-001').
project_id
- Google Cloud Project ID. If None, uses GOOGLE_CLOUD_PROJECT env var.
location
- Google Cloud region (default: 'us-central1').
input_bucket
- GCS bucket for input data (required).
output_bucket
- GCS bucket for output data (required).
hyperparameters
- Additional parameters for the batch requests.
Returns:
A list of raw model responses.
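The batch path stages requests in Cloud Storage as JSONL before submitting the job. A minimal sketch of the request-formatting step, assuming the one-JSON-object-per-line layout used by Gemini batch prediction (the `to_batch_jsonl` helper and its field names are illustrative, not Scorebook internals):

```python
import json
from typing import Any, Dict, List

def to_batch_jsonl(items: List[str], **hyperparameters: Any) -> str:
    """Serialize prompts into one JSON object per line for batch input."""
    lines = []
    for text in items:
        request: Dict[str, Any] = {
            "request": {
                "contents": [{"role": "user", "parts": [{"text": text}]}],
            }
        }
        if hyperparameters:
            # e.g. temperature, maxOutputTokens
            request["request"]["generationConfig"] = hyperparameters
        lines.append(json.dumps(request))
    return "\n".join(lines)

payload = to_batch_jsonl(["What is 2+2?"], temperature=0.0)
```

The resulting string would be uploaded to `input_bucket`; the batch prediction job then writes one response object per request line to `output_bucket`, which is why both buckets are required.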