Tenum Help

AI Handlers (aih)

The AI Handler Artifact (aih)

Filetype: {filename}.aih.lua

An AI Handler is a TENUM® artifact where you define a prompt (or set of prompts) for large language models (LLMs), along with a strict output schema.

TENUM® uses this schema to validate and parse the model’s response—returning it to you as a standard Lua table. By abstracting away lower-level API calls, AI handlers simplify integrating AI-driven features into your app, whether it’s generating text, summarizing data, or even creating random titles for a to-do list.

Usage Sample

Below is a simple AI Handler that generates a random task title for a to-do list:

Sample App: TENUM® ToDo App

Sample AI Handler: createAiTask.aih.lua

```lua
local export = {}

export.taskDescription = "Create a random task title for a todo app."

export.config = {
  model = "gpt-4o",
  temperature = 0.8,
}

export.outputSchema = {
  type = "object",
  description = "todo task callback",
  properties = {
    title = { type = "string", description = "title of the task" },
  },
  required = { "title" },
}

return export
```
  • taskDescription: The prompt or instructions sent to the AI model.

  • config: AI model settings like temperature, model name, etc.

  • outputSchema: Defines the expected structure of the result; TENUM® validates the LLM output against this schema (see the sketch just below).
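
For illustration, a response that passes this schema is handed back to the caller as a plain Lua table (the title text below is hypothetical model output):

```lua
-- The validated, parsed result of the handler above:
local result = {
  title = "Water the office plants",  -- hypothetical model output
}
```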

To call this AI handler from, say, a UI element or message handler:

```lua
+ Button{
  text = "AI",
  onClick = function()
    local ai = MessageDispatcher.dispatch{
      metaMessage = "todo.api.createAiTask",
      message = {},
    }
    todo.commands.create{
      title = ai.result.title,
      listId = props.listId,
    }
  end,
}
```
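
Here, dispatch runs the AI handler and returns its validated output, so ai.result.title is the string the schema requires and can be passed straight to todo.commands.create.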

Best Practices

  1. Use Output Schemas

    • Use a strict outputSchema to ensure your AI responses conform to the format your app needs. This also helps you catch unexpected or malformed AI output early (see the first sketch after this list).

  2. Keep AI Logic Separate

    • Avoid putting extensive non-AI business logic in the AI handler. Instead, delegate complex operations to modules or entities, keeping your AI handler’s focus on the prompt and response structure.

  3. Handle Errors Gracefully

    • AI requests may time out or produce partial responses. Implement fallbacks or error handling in your calling artifacts (e.g., message handlers, UI) to handle these cases (see the error-handling sketch after this list).

  4. Test Scenarios Thoroughly

    • Because LLM outputs can vary, test your AI handler with diverse prompts and edge cases. Write TENUM® spec files to automate these tests when possible.

  5. Monitor Usage

    • Be mindful of usage limits and performance implications, especially if your app depends on large or frequent AI calls. Leverage TENUM®'s logging or external monitoring to track costs and response times.
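
To make the first practice concrete, here is a slightly stricter schema that requires two fields, so a response missing either one is rejected before it reaches your app (the field names are illustrative; only schema keywords already used in the sample appear here):

```lua
export.outputSchema = {
  type = "object",
  description = "structured todo task",
  properties = {
    title = { type = "string", description = "short task title" },
    details = { type = "string", description = "one-sentence description of the task" },
  },
  -- Both fields must be present, or validation rejects the response.
  required = { "title", "details" },
}
```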
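
And a minimal error-handling sketch for the third practice, assuming MessageDispatcher.dispatch raises a Lua error on timeout or schema-validation failure (an assumption; check the actual error semantics of your TENUM® version):

```lua
+ Button{
  text = "AI",
  onClick = function()
    -- pcall traps errors raised by the dispatch, e.g. timeouts or
    -- schema-validation failures (assumed behavior).
    local ok, ai = pcall(MessageDispatcher.dispatch, {
      metaMessage = "todo.api.createAiTask",
      message = {},
    })
    -- Fall back to a fixed title if the AI call did not succeed.
    local title = ok and ai.result and ai.result.title or "Untitled task"
    todo.commands.create{ title = title, listId = props.listId }
  end,
}
```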

By encapsulating your LLM requests in AI Handlers, you separate “how we talk to the AI” from the rest of your codebase. This simplifies development, testing, and prompt iteration, letting you adjust the AI experience without rewriting the rest of your application logic.

Last modified: 28 April 2025