Model Marketplace

The Model Marketplace is CoAI.Dev's centralized hub for discovering, configuring, and managing AI models. It provides an intuitive interface for exploring available models, understanding their capabilities, and organizing your workspace for optimal productivity.

Overview

The marketplace serves two main functions:

  1. Discovery: Browse and learn about available AI models from your configured channels
  2. Workspace Management: Organize and customize which models appear in your chat interface

Workspace vs Marketplace

  • Marketplace: Shows all available models from your configured channels
  • Workspace: Shows only the models you've added for active use in conversations
  • Only workspace models appear in the main chat interface
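
Conceptually, the workspace is just a user-selected subset of the marketplace catalog. A minimal sketch of that relationship (the field names and data shapes here are illustrative, not CoAI.Dev's actual schema):

# Illustrative only: field names are assumptions, not CoAI.Dev's actual schema.
marketplace = [
    {"id": "gpt-4", "name": "GPT-4", "category": "text"},
    {"id": "gpt-3.5-turbo", "name": "GPT-3.5 Turbo", "category": "text"},
    {"id": "dall-e-3", "name": "DALL-E 3", "category": "image"},
]

# The workspace stores only the model IDs the user has added.
workspace_ids = {"gpt-4", "dall-e-3"}

# Only workspace models are offered in the chat interface.
chat_models = [m for m in marketplace if m["id"] in workspace_ids]
print([m["name"] for m in chat_models])  # ['GPT-4', 'DALL-E 3']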

Marketplace Features

🎯 Model Discovery

Browse models with rich information:

  • Model Cards: Visual cards with names, descriptions, and capabilities
  • Category Filtering: Filter by model type (text, image, audio, code)
  • Provider Grouping: Organize by AI service provider
  • Capability Tags: Quick identification of model features
  • Performance Metrics: Response time, cost, and availability data

📊 Model Information

Each model listing provides comprehensive details:

  • Description: What the model does and its strengths
  • Use Cases: Recommended applications and scenarios
  • Parameters: Context length, max tokens, supported features
  • Pricing: Token costs and rate limits
  • Examples: Sample inputs and outputs
  • Documentation: Links to provider documentation

🏷️ Smart Categorization

Models are automatically grouped into categories such as:

General Purpose Language Models

  • GPT-4: Advanced reasoning and complex tasks
  • GPT-3.5 Turbo: Fast, cost-effective conversations
  • Claude: Detailed analysis and creative writing
  • Gemini Pro: Multimodal capabilities

Characteristics:

  • High-quality text generation
  • Conversational abilities
  • Various context lengths
  • Different pricing tiers
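
As a rough illustration of how capability tags can drive this grouping, here is a minimal sketch; the tag names and category rules below are assumptions for the example, not CoAI.Dev internals:

# Illustrative categorization rules; tag and category names are assumptions.
CATEGORY_RULES = {
    "image-generation": "Image",
    "speech-to-text": "Audio",
    "code-generation": "Code",
    "text-generation": "General Purpose Language Models",
}

def categorize(capability_tags):
    """Return the first matching category for a model's capability tags."""
    for tag in capability_tags:
        if tag in CATEGORY_RULES:
            return CATEGORY_RULES[tag]
    return "Other"

print(categorize(["text-generation", "reasoning"]))  # General Purpose Language Models
print(categorize(["image-generation"]))              # Image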

Workspace Management

Adding Models to Workspace

Browse the Marketplace

  1. Navigate to Model Marketplace from the main menu
  2. Use filters to find models by category, provider, or capabilities
  3. Click on a model card to view detailed information

Review Model Details

Examine the model's:

  • Capabilities: What tasks it can perform
  • Performance: Speed and quality metrics
  • Cost: Token pricing and rate limits
  • Requirements: Any special configuration needed

Add to Workspace

  1. Click "Add to Workspace" on the model card
  2. Customize the display name and description if desired
  3. Set any model-specific parameters
  4. The model will now appear in your chat interface

Configure Display Settings

Customize how the model appears (see the sketch after this list):

  • Display Name: Custom name for your workspace
  • Description: Personal notes about usage
  • Tags: Custom categorization
  • Order: Position in workspace list
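
Putting the steps above together, a workspace entry can be thought of as the marketplace model plus your display preferences. A minimal sketch; the structure is illustrative, since CoAI.Dev stores this for you when you click "Add to Workspace":

# Illustrative workspace entry; field names mirror the display settings above.
workspace_entry = {
    "model_id": "gpt-4",
    "display_name": "GPT-4 (Complex tasks)",        # custom name shown in your workspace
    "description": "Primary model for deep analysis",  # personal usage notes
    "tags": ["analysis", "primary"],                 # custom categorization
    "order": 1,                                      # position in the workspace list
    "enabled": True,
}

def sort_workspace(entries):
    """Order workspace models the way they should appear in the chat interface."""
    return sorted(entries, key=lambda e: e["order"])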

Workspace Organization

Organize your models for efficient access:

📋 My Workspace
├── 🎯 Primary Models
│   ├── GPT-4 (Complex tasks)
│   ├── Claude (Writing & analysis)
│   └── Gemini Pro (Multimodal)
├── ⚡ Quick Tasks
│   ├── GPT-3.5 Turbo (Fast chat)
│   └── Code Llama (Programming)
└── 🎨 Creative
    ├── DALL-E 3 (Images)
    └── Midjourney (Art)

Model Configuration

Each workspace model can be customized:

{
  "model_id": "gpt-4",
  "display_name": "GPT-4 Pro",
  "description": "For complex analysis and reasoning",
  "custom_parameters": {
    "temperature": 0.7,
    "max_tokens": 4096,
    "system_prompt": "You are a helpful assistant..."
  },
  "tags": ["analysis", "writing", "reasoning"],
  "enabled": true,
  "show_in_workspace": true
}
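
A quick sketch of how such a configuration could be loaded and sanity-checked before use; the required fields follow the example above and are illustrative rather than a formal schema:

import json

# Fields taken from the example above; not a formal CoAI.Dev schema.
REQUIRED_FIELDS = {"model_id", "display_name", "custom_parameters", "enabled"}

def load_model_config(raw: str) -> dict:
    """Parse a workspace model configuration and check the basics."""
    config = json.loads(raw)
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    temperature = config["custom_parameters"].get("temperature", 1.0)
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature should be between 0.0 and 2.0")
    return config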

Model Categories and Use Cases

Text Generation Models

Model          | Best For                    | Context Length | Speed  | Cost
GPT-4          | Complex reasoning, analysis | 8K-32K         | Medium | High
GPT-3.5 Turbo  | Quick conversations         | 4K-16K         | Fast   | Low
Claude 3       | Writing, coding, analysis   | 200K           | Medium | Medium
Gemini Pro     | Multimodal tasks            | 32K            | Fast   | Medium

Specialized Models

Model       | Purpose            | Input Types  | Output
DALL-E 3    | Image generation   | Text prompts | Images
Whisper     | Speech recognition | Audio files  | Text
Embeddings  | Similarity search  | Text         | Vectors
Moderation  | Content filtering  | Text         | Safety scores

Advanced Features

Custom Model Integration

Add private or custom models:

Configure Channel

Set up a channel pointing to your custom model endpoint:

{
  "name": "Custom Model",
  "type": "openai",
  "base_url": "https://your-model-api.com/v1",
  "api_key": "your-api-key"
}
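
Before saving the channel, it can help to confirm that the endpoint really speaks the OpenAI-compatible API. A minimal sketch using the standard /models listing route; the URL and key are placeholders from the example above:

import requests

# Placeholders: replace with your endpoint and key from the channel config above.
BASE_URL = "https://your-model-api.com/v1"
API_KEY = "your-api-key"

# OpenAI-compatible servers expose GET /models for listing available models.
resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
print([m["id"] for m in resp.json().get("data", [])])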

Define Model Metadata

Create the model's metadata entry:

{
  "id": "custom-model-v1",
  "name": "My Custom Model",
  "description": "Specialized model for specific tasks",
  "capabilities": ["text-generation", "custom-domain"],
  "context_length": 4096,
  "pricing": {
    "input_tokens": 0.001,
    "output_tokens": 0.002
  }
}

Add to Marketplace

Your custom model will appear in the marketplace alongside standard models, ready to be added to workspaces.

Model Templates

Create reusable model configurations:

{
  "template_name": "Code Review Assistant",
  "base_model": "gpt-4",
  "system_prompt": "You are an expert code reviewer...",
  "parameters": {
    "temperature": 0.3,
    "max_tokens": 2048
  },
  "suggested_use_cases": [
    "Code review",
    "Bug detection", 
    "Optimization suggestions"
  ]
}
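
One way to think about templates is as a base configuration that gets merged with per-workspace overrides. A minimal sketch of that merge, purely illustrative rather than CoAI.Dev's internal template engine:

# Illustrative template application: overrides win over template defaults.
template = {
    "base_model": "gpt-4",
    "system_prompt": "You are an expert code reviewer...",
    "parameters": {"temperature": 0.3, "max_tokens": 2048},
}

def apply_template(template: dict, overrides: dict) -> dict:
    """Merge a model template with workspace-specific overrides."""
    merged = {**template, **overrides}
    merged["parameters"] = {**template.get("parameters", {}),
                            **overrides.get("parameters", {})}
    return merged

config = apply_template(template, {"parameters": {"max_tokens": 4096}})
print(config["parameters"])  # {'temperature': 0.3, 'max_tokens': 4096}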

A/B Testing

Compare model performance:

Model Comparison

Set up side-by-side comparisons (see the sketch below) to:

  • Test different models on the same prompts
  • Compare response quality and speed
  • Analyze cost-effectiveness
  • Make data-driven model selection decisions
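
A minimal comparison harness might send the same prompt to each candidate model and record latency, assuming an OpenAI-compatible /chat/completions endpoint; the URL and key below are placeholders, and response-quality scoring is left to you:

import time
import requests

BASE_URL = "https://your-coai-instance.com/v1"  # placeholder endpoint
API_KEY = "your-api-key"                         # placeholder key

def compare_models(prompt: str, models: list[str]) -> dict:
    """Send the same prompt to each model and record latency and output."""
    results = {}
    for model in models:
        start = time.time()
        resp = requests.post(
            f"{BASE_URL}/chat/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        results[model] = {
            "latency_s": round(time.time() - start, 2),
            "reply": resp.json()["choices"][0]["message"]["content"],
        }
    return results

print(compare_models("Summarize the benefits of caching.", ["gpt-4", "gpt-3.5-turbo"]))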

Best Practices

Workspace Organization

  • Categorize by Purpose: Group models by use case rather than provider
  • Limit Active Models: Keep workspace focused with 5-10 frequently used models
  • Use Descriptive Names: Add context to model names for quick identification
  • Regular Cleanup: Remove unused models to maintain organization

Model Selection

  • Match Task Complexity: Use appropriate model power for the task
  • Consider Cost: Balance performance needs with budget constraints
  • Test Performance: Evaluate models with your specific use cases
  • Monitor Usage: Track which models provide the best value

Performance Optimization

  • Cache Common Responses: Enable caching for frequently asked questions
  • Batch Similar Requests: Group related queries for efficiency
  • Optimize Prompts: Craft prompts that work well with specific models
  • Monitor Metrics: Track response times and success rates
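
For the caching suggestion above, here is a minimal in-memory sketch; a production setup would more likely use Redis or the platform's own caching, and call_model is a placeholder for whatever function actually hits the model API:

import hashlib

_cache: dict[str, str] = {}

def cached_completion(model: str, prompt: str, call_model) -> str:
    """Return a cached response for identical (model, prompt) pairs."""
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(model, prompt)  # call_model is a placeholder
    return _cache[key]

# Usage: cached_completion("gpt-3.5-turbo", "What is CoAI.Dev?", my_api_call)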

Troubleshooting

Model Not Appearing

Common Issues

Problem: Model doesn't show in marketplace

Solutions:

  1. Check if the channel providing the model is enabled
  2. Verify API key has access to the model
  3. Ensure model is supported by the channel configuration
  4. Check for any regional restrictions

Performance Issues

Slow Responses:

  • Check channel health and response times
  • Consider using faster models for simple tasks
  • Verify network connectivity to model providers
  • Review prompt complexity and length

High Costs:

  • Monitor token usage patterns
  • Use appropriate models for task complexity
  • Implement caching for repeated queries
  • Set usage quotas and alerts
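
A simple sketch of the quota-and-alert idea from the list above, tracking token usage per model against a budget; the budgets and the alert action are placeholders:

# Illustrative per-model token budgets; adjust to your own limits.
BUDGETS = {"gpt-4": 500_000, "gpt-3.5-turbo": 2_000_000}
usage: dict[str, int] = {}

def record_usage(model: str, tokens: int, alert_at: float = 0.8) -> None:
    """Accumulate token usage and warn when a model nears its budget."""
    usage[model] = usage.get(model, 0) + tokens
    budget = BUDGETS.get(model)
    if budget and usage[model] >= alert_at * budget:
        print(f"ALERT: {model} at {usage[model]}/{budget} tokens")  # placeholder notification

record_usage("gpt-4", 450_000)  # prints an alert at 90% of the budget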

Configuration Problems

Model Won't Add to Workspace:

  1. Verify you have permission to access the model
  2. Check for workspace limits or quotas
  3. Ensure model is properly configured in channels
  4. Review any error messages in logs

Integration Examples

Custom Workspace Setup

// Example: Setting up a development-focused workspace
const devWorkspace = {
  models: [
    {
      id: "gpt-4",
      name: "Code Architect",
      prompt: "You are a senior software architect...",
      tags: ["architecture", "design"]
    },
    {
      id: "code-llama",
      name: "Code Assistant", 
      prompt: "You are a coding assistant...",
      tags: ["coding", "debugging"]
    },
    {
      id: "gpt-3.5-turbo",
      name: "Quick Helper",
      prompt: "You are a helpful assistant...",
      tags: ["general", "quick"]
    }
  ]
};

Automated Model Selection

# Example: Dynamic model selection based on task type
def select_model(task_type, complexity):
    if task_type == "code" and complexity == "high":
        return "gpt-4"
    elif task_type == "code" and complexity == "low":
        return "code-llama"
    elif task_type == "general":
        return "gpt-3.5-turbo"
    else:
        return "claude-3"

Ready to explore models? Navigate to the model marketplace in your CoAI.Dev admin panel to discover available models, or check out Channel Management to add more AI providers to your instance.