AI Integration Guide

This guide provides comprehensive documentation for the AI subsystem in WooAI Chatbot Pro. It covers the multi-provider architecture, tool system, prompt engineering, and extension patterns for senior developers integrating or extending the AI capabilities.

Known Issues & Limitations

Before you dive in, here's what we're still working on:

Issue | Status | Workaround
Gemini sometimes returns empty content | Investigating | We retry 3x, usually works
Claude streaming not implemented | Planned for 0.3.0 | Use non-streaming mode
OpenAI function_call deprecated | TODO: migrate to tools API | Still works, just warnings in logs
Token counting inaccurate for non-English | Won't fix (tiktoken limitation) | Estimate +20% for Cyrillic

Real talk: We started with OpenAI only, then bolted on Claude and Gemini. The abstraction layer works but isn't perfect. PRs welcome if you want to clean it up.


1. Architecture Overview

The AI subsystem follows a multi-provider orchestration pattern with intelligent fallback, enabling seamless switching between AI providers while maintaining consistent API contracts.

Core Components

AIOrchestrator (Singleton)
    |
    +-- Providers (AIProviderInterface)
    |       +-- GeminiProvider
    |       +-- OpenAIProvider
    |       +-- ClaudeProvider
    |
    +-- ToolRegistry (Singleton)
    |       +-- SearchProductsTool
    |       +-- AddToCartTool
    |       +-- CreateCheckoutTool
    |
    +-- PromptBuilder

Design Principles

  1. Provider Abstraction: All providers implement AIProviderInterface, enabling polymorphic behavior
  2. Singleton Orchestration: AIOrchestrator::instance() ensures single point of provider management
  3. Fallback Chain: Configurable priority with automatic failover on errors or rate limits
  4. Tool Extensibility: Plugin-based tool registration via ToolRegistry

Data Flow

User Message
    |
    v
MessageHandler --> PromptBuilder (Context Assembly)
    |
    v
AIOrchestrator --> Provider Selection (Priority Order)
    |
    v
Provider API Call --> Response Parsing
    |
    v
ToolRegistry --> Tool Execution (if function_call detected)
    |
    v
Response to Frontend
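
In code, the flow corresponds roughly to the sketch below (simplified; the method names come from the sections that follow, $session and $user_message stand in for the incoming chat request, and error handling is omitted):

$builder = new \WooAIChatbot\AI\PromptBuilder();
$context = $builder->build_context_from_session( $session );
$system  = $builder->build_system_prompt( $context );

$messages = [
    ['role' => 'system', 'content' => $system],
    ['role' => 'user',   'content' => $user_message],
];

$response = \WooAIChatbot\AI\AIOrchestrator::instance()->generate_response( $messages, $context );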

2. AIProviderInterface

All AI providers must implement the AIProviderInterface contract defined in /includes/AI/AIProviderInterface.php.

Interface Contract

namespace WooAIChatbot\AI;

interface AIProviderInterface {
    /**
     * Generate AI response from conversation messages.
     *
     * @param array $messages Array of message objects with 'role' and 'content'.
     * @param array $context  Additional context (products, cart, tools, etc.).
     * @return array|WP_Error Response with 'content', 'tokens', 'model'.
     */
    public function generate_response( $messages, $context = array() );

    /**
     * Format messages for provider-specific API requirements.
     *
     * @param array $messages Internal message format.
     * @return array Provider-formatted messages.
     */
    public function format_messages( $messages );

    /**
     * Validate API key with minimal API call.
     *
     * @return bool|WP_Error True if valid, WP_Error otherwise.
     */
    public function validate_api_key();

    /**
     * Get available models for this provider.
     *
     * @return array Model identifiers with display names.
     */
    public function get_models();

    /**
     * Get human-readable provider name.
     *
     * @return string Provider name (e.g., 'OpenAI', 'Google Gemini').
     */
    public function get_provider_name();

    /**
     * Check if provider is configured and available.
     *
     * @return bool True if API key exists and provider is ready.
     */
    public function is_available();
}

Response Structure

All providers must return responses in this normalized format:

[
    'content'  => 'AI response text',
    'tokens'   => 150,                    // Total tokens used
    'model'    => 'gpt-4o-mini',          // Model identifier
    'provider' => 'openai',               // Added by orchestrator
    'duration' => 1250.5,                 // Milliseconds (added by orchestrator)
    // Optional for function calling:
    'function_call' => [
        'name'      => 'search_products',
        'arguments' => '{"query": "mattress"}'
    ]
]
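
If the normalized response contains a function_call, the caller dispatches it through the ToolRegistry (section 6). A minimal sketch of that hand-off, assuming the tool result is fed back into the conversation for a follow-up turn (the exact role used for tool results varies by provider):

if ( isset( $response['function_call'] ) ) {
    $call = $response['function_call'];

    // Arguments arrive as a JSON string in the normalized format.
    $args   = json_decode( $call['arguments'], true );
    $result = \WooAIChatbot\AI\ToolRegistry::instance()->execute_tool( $call['name'], $args );

    // Feed the tool output back so the model can phrase the final answer.
    $messages[] = ['role' => 'assistant', 'content' => wp_json_encode( $result )];
    $response   = \WooAIChatbot\AI\AIOrchestrator::instance()->generate_response( $messages, $context );
}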

3. AI Providers

Claude Provider

File: /includes/AI/Providers/class-claude-provider.php

Claude uses Anthropic's Messages API with a distinct message format where system prompts are passed separately.

API Configuration

private const API_BASE_URL = 'https://api.anthropic.com/v1';
private const API_VERSION = '2023-06-01';
private const DEFAULT_MODEL = 'claude-3-5-sonnet-20241022';

Supported Models

Model ID | Description
claude-3-5-sonnet-20241022 | Most capable, balanced performance
claude-3-5-haiku-20241022 | Fast and affordable
claude-3-opus-20240229 | Legacy, highest capability
claude-3-sonnet-20240229 | Legacy, balanced

Message Formatting

Claude requires system prompts as a separate parameter:

public function format_messages( $messages ) {
    $system   = '';
    $formatted = [];

    foreach ( $messages as $message ) {
        if ( 'system' === $message['role'] && empty( $system ) ) {
            $system = $message['content'];
            continue;
        }
        if ( in_array( $message['role'], ['user', 'assistant'], true ) ) {
            $formatted[] = [
                'role'    => $message['role'],
                'content' => $message['content'],
            ];
        }
    }

    return [
        'system'   => $system,
        'messages' => $formatted,
    ];
}

Request Payload

$payload = [
    'model'       => 'claude-3-5-sonnet-20241022',
    'max_tokens'  => 1000,
    'temperature' => 0.7,
    'system'      => $system_prompt,
    'messages'    => $formatted_messages,
];
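
The payload is POSTed to the Messages endpoint with Anthropic's documented x-api-key and anthropic-version headers. A minimal sketch using the WordPress HTTP API, inside the provider class (the plugin's actual HTTP client code may differ):

$response = wp_remote_post(
    self::API_BASE_URL . '/messages',
    [
        'timeout' => 30,
        'headers' => [
            'x-api-key'         => $this->api_key,
            'anthropic-version' => self::API_VERSION,
            'content-type'      => 'application/json',
        ],
        'body'    => wp_json_encode( $payload ),
    ]
);

$body = json_decode( wp_remote_retrieve_body( $response ), true );
$text = $body['content'][0]['text'] ?? '';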

OpenAI Provider

File: /includes/AI/Providers/class-openai-provider.php

OpenAI provides both chat completions and embeddings, making it the preferred provider for RAG and embedding workflows.

API Configuration

private const API_BASE_URL = 'https://api.openai.com/v1';
private const DEFAULT_MODEL = 'gpt-4o-mini';
private const DEFAULT_EMBEDDING_MODEL = 'text-embedding-3-small';

Supported Models

Model ID | Use Case
gpt-4o | Most capable, multimodal
gpt-4o-mini | Balanced cost/performance
gpt-4-turbo | High capability with vision
gpt-3.5-turbo | Fast, cost-effective

Embedding Generation

OpenAI is prioritized for embedding requests due to superior embedding models:

public function generate_embedding( $messages ) {
    $text = $messages[0]['content'];

    $payload = [
        'model' => 'text-embedding-3-small',
        'input' => $text,
    ];

    $response = $this->client->post( '/embeddings', ['json' => $payload] );
    $body     = json_decode( $response->getBody(), true );

    return [
        'embedding' => $body['data'][0]['embedding'],
        'tokens'    => $body['usage']['total_tokens'],
        'model'     => $this->embedding_model,
    ];
}

Function Calling Format

OpenAI uses structured tool_calls in responses:

// Response structure
[
    'choices' => [
        [
            'message' => [
                'tool_calls' => [
                    [
                        'id' => 'call_abc123',
                        'function' => [
                            'name' => 'search_products',
                            'arguments' => '{"query":"laptop"}'
                        ]
                    ]
                ]
            ]
        ]
    ]
]

Gemini Provider

File: /includes/AI/Providers/GeminiProvider.php

Google's Gemini uses a different API structure with contents and parts.

API Configuration

private const API_BASE_URL = 'https://generativelanguage.googleapis.com/v1beta/';
private const DEFAULT_MODEL = 'gemini-2.5-flash-lite';

Supported Models

Model ID | Description
gemini-2.5-flash-lite | Best value, fast
gemini-2.0-flash | Stable performance
gemini-2.5-flash | Premium capabilities

Message Formatting

Gemini uses user/model roles with a parts structure:

public function format_messages( $messages ) {
    $formatted = [];

    foreach ( $messages as $message ) {
        $role = 'assistant' === $message['role'] ? 'model' : 'user';

        $formatted[] = [
            'role'  => $role,
            'parts' => [
                ['text' => $message['content']]
            ],
        ];
    }

    return $formatted;
}

Function Declaration Format

Gemini requires tools in function_declarations format:

private function format_tools_for_gemini( $tools ) {
    $function_declarations = [];

    foreach ( $tools as $tool ) {
        if ( isset( $tool['function'] ) ) {
            $function_declarations[] = $tool['function'];
        }
    }

    return [
        ['function_declarations' => $function_declarations]
    ];
}

Function Call Response

Gemini returns function calls differently:

// Response structure
[
    'candidates' => [
        [
            'content' => [
                'parts' => [
                    [
                        'functionCall' => [
                            'name' => 'search_products',
                            'args' => ['query' => 'mattress']
                        ]
                    ]
                ]
            ]
        ]
    ]
]
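
To satisfy the normalized response structure from section 2, the provider has to turn Gemini's decoded args array back into a JSON string. A sketch of that normalization (the parsing in the actual provider may differ):

$part = $body['candidates'][0]['content']['parts'][0] ?? [];

if ( isset( $part['functionCall'] ) ) {
    $normalized['function_call'] = [
        'name'      => $part['functionCall']['name'],
        // Gemini returns arguments as an array; the normalized format expects a JSON string.
        'arguments' => wp_json_encode( $part['functionCall']['args'] ),
    ];
}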

4. AI Orchestrator

File: /includes/AI/class-ai-orchestrator.php

The orchestrator is the central coordinator for all AI operations, implementing the singleton pattern with comprehensive provider management.

Obtaining Instance

$orchestrator = \WooAIChatbot\AI\AIOrchestrator::instance();

Provider Priority Configuration

Priority is configured via an environment variable or a WordPress transient:

# .env
AI_PROVIDER_PRIORITY=gemini,openai,claude

// Programmatic priority change
$orchestrator->set_primary_provider( 'openai' );
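
If a custom integration needs to resolve the priority list itself, the environment variable can be parsed like this (a sketch; the orchestrator's own parsing may differ):

$raw      = getenv( 'AI_PROVIDER_PRIORITY' ) ?: 'gemini,openai,claude';
$priority = array_map( 'trim', explode( ',', $raw ) );
// ['gemini', 'openai', 'claude']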

Response Generation with Fallback

public function generate_response( $messages, $context = [] ) {
    $priority_order = $this->priority;

    // Prioritize OpenAI for embedding requests
    if ( $context['embedding'] ?? false ) {
        $priority_order = array_merge(
            ['openai'],
            array_diff( $this->priority, ['openai'] )
        );
    }

    foreach ( $priority_order as $provider_name ) {
        $provider = $this->get_provider( $provider_name );

        if ( ! $provider || ! $provider->is_available() ) {
            continue;
        }

        $start    = microtime( true );
        $response = $provider->generate_response( $messages, $context );
        $duration = ( microtime( true ) - $start ) * 1000; // Milliseconds.

        if ( ! is_wp_error( $response ) ) {
            $this->record_usage( $provider_name, true, $duration );
            $response['provider'] = $provider_name;
            return $response;
        }

        // Handle rate limiting with retry
        if ( 'rate_limit_exceeded' === $response->get_error_code() ) {
            sleep( 5 );
            $retry = $provider->generate_response( $messages, $context );
            if ( ! is_wp_error( $retry ) ) {
                $retry['provider'] = $provider_name;
                return $retry;
            }
        }

        $this->record_usage( $provider_name, false, $duration );
    }

    return new WP_Error( 'all_providers_failed', 'All AI providers failed.' );
}

Rate Limiting Strategy

Each provider retries with a linearly increasing backoff delay:

// On 429 (Rate Limit) response
sleep( self::RETRY_DELAY * $attempts ); // 2s, 4s, 6s...

Usage Statistics

$stats = $orchestrator->get_usage_stats();
// Returns:
[
    'openai' => [
        'total'        => 150,
        'success'      => 145,
        'failure'      => 5,
        'success_rate' => 96.67,
        'avg_time'     => 1250.5
    ]
]

Caching Strategy

Transients are used for caching, with a 24-hour expiration.
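
A minimal sketch of the pattern; the cache key, $provider_name, and the wrapped get_models() call are illustrative, not the plugin's actual keys:

$cache_key = 'woo_ai_chatbot_models_' . $provider_name;
$models    = get_transient( $cache_key );

if ( false === $models ) {
    $models = $provider->get_models();
    set_transient( $cache_key, $models, DAY_IN_SECONDS ); // 24-hour expiration.
}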


5. Prompt Builder

File: /includes/AI/class-prompt-builder.php

The PromptBuilder constructs comprehensive system prompts with WooCommerce context, user data, and tool instructions.

Building System Prompts

$builder = new \WooAIChatbot\AI\PromptBuilder();

$context = [
    'products' => $search_results,
    'cart' => [
        'items' => [...],
        'total' => '$150.00'
    ],
    'user' => [
        'name' => 'John',
        'orders_count' => 5
    ],
    'current_product' => $product_data,
    'recently_viewed' => [123, 456, 789]
];

$system_prompt = $builder->build_system_prompt( $context );

Context Injection Layers

  1. Store Context: Shop name, URL, locale
  2. User Context: Name, email, order history, spending
  3. Cart Context: Items, quantities, totals
  4. Product Context: Current page product, recently viewed
  5. Tool Context: Available function definitions
  6. Language Rules: Multilingual detection and response

WooCommerce Context Assembly

public function build_context_from_session( $session ) {
    $context = [];

    // User context
    if ( is_user_logged_in() ) {
        $user = wp_get_current_user();
        $context['user'] = [
            'id'           => $user->ID,
            'name'         => $user->display_name,
            'orders_count' => wc_get_customer_order_count( $user->ID ),
            'total_spent'  => wc_get_customer_total_spent( $user->ID ),
        ];
    }

    // Cart context
    if ( WC()->cart ) {
        $context['cart'] = [
            'items'    => $this->format_cart_items(),
            'subtotal' => WC()->cart->get_subtotal(),
            'total'    => WC()->cart->get_total( 'edit' ),
        ];
    }

    return $context;
}

OpenAI Tool Format Export

$openai_tools = $builder->get_openai_tools();
// Returns OpenAI function-calling schema:
[
    [
        'type' => 'function',
        'function' => [
            'name' => 'search_products',
            'description' => 'Search for products...',
            'parameters' => [
                'type' => 'object',
                'properties' => [...],
                'required' => []
            ]
        ]
    ]
]
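
These schemas are passed to the orchestrator through the context array and read by the providers from $context['tools'] (see the Mistral example in section 8). For example:

$response = $orchestrator->generate_response(
    $messages,
    ['tools' => $builder->get_openai_tools()]
);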

6. Tool System

The tool system enables AI models to execute structured actions within WooCommerce.

ToolInterface

File: /includes/AI/Tools/class-tool-interface.php

namespace WooAIChatbot\AI\Tools;

interface ToolInterface {
    /**
     * Unique tool identifier used by AI.
     */
    public function get_name();

    /**
     * OpenAI-compatible function schema.
     */
    public function get_schema();

    /**
     * Execute tool with validated parameters.
     */
    public function execute( $params );

    /**
     * Validate parameters before execution.
     */
    public function validate_params( $params );
}

Tool Registry

File: /includes/AI/class-tool-registry.php

$registry = \WooAIChatbot\AI\ToolRegistry::instance();

// Register custom tool
$registry->register_tool( new MyCustomTool() );

// Execute tool
$result = $registry->execute_tool( 'search_products', ['query' => 'laptop'] );

// Get all tool schemas for AI
$schemas = $registry->get_tool_schemas();

SearchProductsTool

File: /includes/AI/Tools/class-search-products-tool.php

Searches WooCommerce products with full-text search, category filtering, and price ranges.

Schema:

[
    'type' => 'function',
    'function' => [
        'name' => 'search_products',
        'description' => 'Search for WooCommerce products...',
        'parameters' => [
            'properties' => [
                'query'     => ['type' => 'string'],
                'category'  => ['type' => 'string'],
                'min_price' => ['type' => 'number'],
                'max_price' => ['type' => 'number'],
                'in_stock'  => ['type' => 'boolean'],
                'limit'     => ['type' => 'integer', 'minimum' => 1, 'maximum' => 20],
                'sort'      => ['type' => 'string']  // 'relevance', 'price_asc', 'price_desc'
            ],
            'required' => []
        ]
    ]
]

Response:

[
    'products' => [
        [
            'id' => 123,
            'name' => 'Premium Mattress',
            'price' => '599.00',
            'price_formatted' => '$599.00',
            'in_stock' => true,
            'categories' => ['Mattresses', 'Bedroom'],
            'permalink' => 'https://...',
            'image_url' => 'https://...',
            'variations' => [...],  // For variable products
            'attributes' => [...]
        ]
    ],
    'count' => 5,
    'query' => ['query' => 'mattress', 'limit' => 5]
]

AddToCartTool

File: /includes/AI/Tools/class-add-to-cart-tool.php

Adds products to cart with quantity and variation support.

Schema:

[
    'function' => [
        'name' => 'add_to_cart',
        'parameters' => [
            'properties' => [
                'product_id'   => ['type' => 'integer'],
                'quantity'     => ['type' => 'integer', 'minimum' => 1, 'maximum' => 100],
                'variation_id' => ['type' => 'integer']
            ],
            'required' => ['product_id']
        ]
    ]
]

CreateCheckoutTool

File: /includes/AI/Tools/class-create-checkout-tool.php

Creates checkout sessions with multiple items and coupon support.

Schema:

[
    'function' => [
        'name' => 'create_checkout',
        'parameters' => [
            'properties' => [
                'items' => [
                    'type' => 'array',
                    'items' => [
                        'properties' => [
                            'product_id' => ['type' => 'integer'],
                            'quantity' => ['type' => 'integer'],
                            'variation_id' => ['type' => 'integer']
                        ]
                    ]
                ],
                'coupon_code' => ['type' => 'string'],
                'clear_cart' => ['type' => 'boolean']
            ],
            'required' => ['items']
        ]
    ]
]
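
Example invocation via the registry (the coupon code and IDs are illustrative; the shape of the return value is defined by the tool's execute() method):

$result = $registry->execute_tool( 'create_checkout', [
    'items' => [
        ['product_id' => 123, 'quantity' => 2],
        ['product_id' => 456, 'quantity' => 1, 'variation_id' => 789],
    ],
    'coupon_code' => 'WELCOME10',
    'clear_cart'  => true,
] );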

Adding Custom Tools

  1. Create a tool class implementing ToolInterface:
namespace WooAIChatbot\AI\Tools;

class GetOrderStatusTool implements ToolInterface {

    public function get_name() {
        return 'get_order_status';
    }

    public function get_schema() {
        return [
            'type' => 'function',
            'function' => [
                'name' => 'get_order_status',
                'description' => 'Get status of customer orders',
                'parameters' => [
                    'type' => 'object',
                    'properties' => [
                        'order_id' => [
                            'type' => 'integer',
                            'description' => 'Specific order ID'
                        ]
                    ],
                    'required' => []
                ]
            ]
        ];
    }

    public function validate_params( $params ) {
        if ( isset( $params['order_id'] ) && ! is_numeric( $params['order_id'] ) ) {
            return new \WP_Error( 'invalid_order_id', 'Order ID must be numeric' );
        }
        return true;
    }

    public function execute( $params ) {
        $order_id = $params['order_id'] ?? null;

        if ( $order_id ) {
            $order = wc_get_order( $order_id );
            if ( ! $order ) {
                return new \WP_Error( 'order_not_found', 'Order not found' );
            }
            return [
                'order_id' => $order_id,
                'status'   => $order->get_status(),
                'total'    => $order->get_total(),
                'date'     => $order->get_date_created()->format( 'Y-m-d' )
            ];
        }

        // Return recent orders for current user
        return $this->get_recent_orders();
    }
}
  2. Register with ToolRegistry:
add_action( 'init', function() {
    $registry = \WooAIChatbot\AI\ToolRegistry::instance();
    $registry->register_tool( new \WooAIChatbot\AI\Tools\GetOrderStatusTool() );
});

7. RAG Integration

The plugin supports Retrieval-Augmented Generation using embeddings and Supabase vector storage.

Semantic Search Flow

User Query
    |
    v
OpenAI Embeddings API --> Query Vector (1536 dimensions)
    |
    v
Supabase pgvector --> Cosine Similarity Search
    |
    v
Top-K Product Results --> Context Injection
    |
    v
AI Provider --> Enhanced Response

Embedding Generation

// Via orchestrator (automatically prioritizes OpenAI)
$result = $orchestrator->generate_response(
    [['role' => 'user', 'content' => 'comfortable mattress']],
    ['embedding' => true]
);

$vector = $result['embedding'];  // 1536-dimensional float array

Context Relevance Scoring

Products are ranked by cosine similarity to query embedding. Top results are injected into the system prompt via PromptBuilder::add_product_context().
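
In production the similarity is computed by pgvector (next section), but for reference this is the metric being applied, sketched in PHP (the function name is illustrative, not part of the plugin):

function woo_ai_cosine_similarity( array $a, array $b ) {
    $dot    = 0.0;
    $norm_a = 0.0;
    $norm_b = 0.0;

    foreach ( $a as $i => $value ) {
        $dot    += $value * $b[ $i ];
        $norm_a += $value * $value;
        $norm_b += $b[ $i ] * $b[ $i ];
    }

    return $dot / ( sqrt( $norm_a ) * sqrt( $norm_b ) );
}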

Supabase Integration

Vector similarity search uses pgvector extension:

SELECT *, 1 - (embedding <=> query_embedding) AS similarity
FROM products_embeddings
WHERE 1 - (embedding <=> query_embedding) > 0.7
ORDER BY similarity DESC
LIMIT 5;

8. Adding a New Provider

Step 1: Create Provider Class

namespace WooAIChatbot\AI\Providers;

use WooAIChatbot\AI\AIProviderInterface;
use GuzzleHttp\Client;
use WP_Error;

class MistralProvider implements AIProviderInterface {

    // Note the trailing slash: Guzzle resolves relative request paths against it.
    private const API_BASE_URL = 'https://api.mistral.ai/v1/';
    private const DEFAULT_MODEL = 'mistral-large-latest';

    private $client;
    private $api_key;
    private $model;

    public function __construct() {
        $settings = get_option( 'woo_ai_chatbot_settings', [] );
        $this->api_key = $settings['mistral_api_key'] ?? '';
        $this->model = self::DEFAULT_MODEL;

        $this->client = new Client([
            'base_uri' => self::API_BASE_URL,
            'timeout' => 30,
            'headers' => [
                'Authorization' => 'Bearer ' . $this->api_key,
                'Content-Type' => 'application/json'
            ]
        ]);
    }

    public function generate_response( $messages, $context = [] ) {
        if ( ! $this->is_available() ) {
            return new WP_Error( 'mistral_unavailable', 'Mistral not configured' );
        }

        $formatted = $this->format_messages( $messages );

        $payload = [
            'model' => $this->model,
            'messages' => $formatted,
            'temperature' => 0.7,
            'max_tokens' => 1000
        ];

        // Add tools if provided
        if ( ! empty( $context['tools'] ) ) {
            $payload['tools'] = $context['tools'];
        }

        try {
            $response = $this->client->post( 'chat/completions', ['json' => $payload] );
            $body = json_decode( $response->getBody(), true );

            return [
                'content' => $body['choices'][0]['message']['content'],
                'tokens' => $body['usage']['total_tokens'],
                'model' => $this->model
            ];
        } catch ( \Exception $e ) {
            return new WP_Error( 'mistral_error', $e->getMessage() );
        }
    }

    public function format_messages( $messages ) {
        // Mistral uses same format as OpenAI
        return array_map( function( $msg ) {
            return [
                'role' => $msg['role'],
                'content' => $msg['content']
            ];
        }, $messages );
    }

    public function validate_api_key() {
        // Minimal validation request: list the available models.
        try {
            $this->client->get( 'models' );
            return true;
        } catch ( \Exception $e ) {
            return new WP_Error( 'invalid_api_key', $e->getMessage() );
        }
    }

    public function get_models() {
        return [
            'mistral-large-latest' => 'Mistral Large',
            'mistral-small-latest' => 'Mistral Small'
        ];
    }

    public function get_provider_name() {
        return 'Mistral AI';
    }

    public function is_available() {
        return ! empty( $this->api_key );
    }
}

Step 2: Register in Orchestrator

Modify AIOrchestrator::initialize_providers():

$mistral_key = $settings['mistral_api_key'] ?? '';
if ( ! empty( $mistral_key ) ) {
    require_once __DIR__ . '/Providers/class-mistral-provider.php';
    $this->providers['mistral'] = new \WooAIChatbot\AI\Providers\MistralProvider();
}

Step 3: Update Priority

AI_PROVIDER_PRIORITY=gemini,mistral,openai,claude

9. Error Handling

Provider Error Codes

Code | Description | Action
rate_limit_exceeded | 429 response | Retry after delay, then fall back
invalid_api_key | 401 response | Skip provider, log error
provider_unavailable | No API key configured | Skip to next provider
all_providers_failed | All providers exhausted | Return error to user

Retry Logic

// Rate limit handling with a linearly increasing backoff delay
if ( 429 === $status_code && $attempts < self::MAX_RETRIES ) {
    sleep( self::RETRY_DELAY * $attempts );
    continue;
}

Fallback Behavior

When a provider fails:

  1. Error is logged with full context
  2. Usage statistics updated (record_usage())
  3. Next provider in priority chain is attempted
  4. After all providers fail, detailed error message returned

Logging

All providers use Monolog with rotating file handler:

$this->logger->error( 'API request failed', [
    'provider' => 'gemini',
    'status_code' => 429,
    'attempt' => 2,
    'error' => $e->getMessage()
]);

Log location: /logs/chatbot.log with 7-day retention.
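
A sketch of the corresponding Monolog setup (assuming Monolog is autoloaded via Composer; the channel name and path resolution are illustrative):

use Monolog\Logger;
use Monolog\Handler\RotatingFileHandler;

$logger = new Logger( 'woo-ai-chatbot' );
// Rotate daily and keep 7 files to match the 7-day retention.
$logger->pushHandler( new RotatingFileHandler( plugin_dir_path( __FILE__ ) . 'logs/chatbot.log', 7 ) );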

Structured Error Response

return new WP_Error(
    'all_providers_failed',
    'All AI providers failed to generate a response.',
    [
        'attempts' => [
            ['provider' => 'gemini', 'error' => 'Rate limit', 'code' => 429],
            ['provider' => 'openai', 'error' => 'Invalid key', 'code' => 401]
        ],
        'duration' => 5200,
        'tried' => ['gemini', 'openai', 'claude']
    ]
);
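
On the calling side, the attached error data can be inspected for diagnostics (a sketch; the array keys follow the structure above):

$response = $orchestrator->generate_response( $messages, $context );

if ( is_wp_error( $response ) ) {
    $data = $response->get_error_data();

    foreach ( $data['attempts'] ?? [] as $attempt ) {
        error_log( sprintf(
            '[WooAI] %s failed (%s): %s',
            $attempt['provider'],
            $attempt['code'],
            $attempt['error']
        ) );
    }
}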

Summary

The WooAI Chatbot Pro AI subsystem provides a robust, extensible architecture for multi-provider AI integration: orchestration with automatic fallback across Gemini, OpenAI, and Claude, a pluggable tool system for WooCommerce actions, context-aware prompt building, and optional RAG via OpenAI embeddings and Supabase pgvector.

For questions or contributions, refer to the plugin's GitHub repository or contact the development team.