🔮 AI Enablement Stack 🔮

A comprehensive list of tools and technologies for enabling agentic AI development

AI Enablement Stack

Welcome to the AI Enablement Stack mapping. The list is structured into layers based on each tool's function in the agentic AI development ecosystem:

AGENT CONSUMER LAYER: The interface layer where AI agents interact with users and systems. This includes standalone autonomous agents, assistive tools that enhance human capabilities, and specialized agents built for specific tasks. It's where AI capabilities are packaged into practical, user-facing applications.

OBSERVABILITY AND GOVERNANCE LAYER: The control layer for monitoring, evaluating, securing, and governing AI systems. This layer handles everything from development pipelines and performance monitoring to risk management and compliance. It ensures AI systems operate reliably and meet organizational standards.

ENGINEERING LAYER: The developer's toolkit for building AI applications. This layer provides essential resources for training models, developing applications, and ensuring quality through testing, offering the tools and methods needed to turn raw AI capabilities into production-ready solutions.

INTELLIGENCE LAYER: The cognitive core of AI systems. This layer contains the frameworks, knowledge engines, and specialized models that power AI applications. It manages the actual processing, decision-making, and information retrieval that makes AI systems intelligent.

INFRASTRUCTURE LAYER: The foundation that powers AI development and deployment. This includes development workspaces, model serving infrastructure, and cloud computing resources. It provides the essential computing backbone that supports all AI operations.

How to Contribute

To contribute to this list:

  1. Read the CONTRIBUTING.md
  2. Fork the repository
  3. Add the tool's logo under the ./public/images/ folder
  4. Add your tool to the appropriate category in the ai-enablement-stack.json file
  5. Submit a pull request

AGENT CONSUMER LAYER

Autonomous Agents

Self-operating AI systems that can complete complex tasks independently

Devin

Category

AGENT CONSUMER LAYER - Autonomous Agents

Description

Cognition develops Devin, the world's first AI software engineer, designed to work as a collaborative teammate that helps engineering teams scale their capabilities through parallel task execution and comprehensive development support.

Links

OpenHands

Category

AGENT CONSUMER LAYER - Autonomous Agents

Description
  • No description available
Links

Lovable

Category

AGENT CONSUMER LAYER - Autonomous Agents

Description
  • No description available
Links

AutoGen

Category

AGENT CONSUMER LAYER - Autonomous Agents

Description
  • No description available
Links

AgentGPT

Category

AGENT CONSUMER LAYER - Autonomous Agents

Description
  • No description available
Links

Superagent

Category

AGENT CONSUMER LAYER - Autonomous Agents

Description
  • No description available
Links

Morph

Category

AGENT CONSUMER LAYER - Autonomous Agents

Description

Morph AI delivers an enterprise-grade developer assistant that automates engineering tasks across multiple languages and frameworks, enabling developers to focus on high-impact work while ensuring code quality through automated testing and compliance.

Links

Assistive Agents

AI tools that enhance human capabilities and workflow efficiency

Copilot

Category

AGENT CONSUMER LAYER - Assistive Agents

Description
  • No description available
Links

Continue.dev

Category

AGENT CONSUMER LAYER - Assistive Agents

Description
  • No description available
Links

Cody

Category

AGENT CONSUMER LAYER - Assistive Agents

Description

Sourcegraph's Cody is an AI coding assistant that combines the latest LLMs (including Claude 3 and GPT-4) with comprehensive codebase context to help developers write, understand, and fix code across multiple IDEs, while offering enterprise-grade security and flexible deployment options.

Links

Cursor

Category

AGENT CONSUMER LAYER - Assistive Agents

Description
  • No description available
Links

Tabnine

Category

AGENT CONSUMER LAYER - Assistive Agents

Description

Tabnine provides a privacy-focused AI code assistant that offers personalized code generation, testing, and review capabilities, featuring bespoke models trained on team codebases, zero data retention, and enterprise-grade security with support for on-premises deployment.

Links

Supermaven

Category

AGENT CONSUMER LAYER - Assistive Agents

Description

Supermaven provides ultra-fast code completion and assistance with a 1M token context window, supporting multiple IDEs (VS Code, JetBrains, Neovim) and LLMs (GPT-4, Claude 3.5), featuring real-time chat interface, codebase scanning, and 3x faster response times compared to competitors.

Links

Windsurf

Category

AGENT CONSUMER LAYER - Assistive Agents

Description

Windsurf provides an agentic IDE that combines copilot and agent capabilities through 'Flows', featuring Cascade for deep contextual awareness, multi-file editing, command suggestions, and LLM-based search tools, all integrated into a VS Code-based editor for seamless AI-human collaboration.

Links

Goose

Category

AGENT CONSUMER LAYER - Assistive Agents

Description

Goose is an open-source autonomous developer agent that operates directly on your machine, capable of executing shell commands, debugging code, managing dependencies, and interacting with development tools like GitHub and Jira, featuring extensible toolkits and support for multiple LLM providers.

Links

Hex

Category

AGENT CONSUMER LAYER - Assistive Agents

Description

Hex provides an AI-powered analytics platform featuring Magic AI for query writing, chart building, and debugging, combining LLM capabilities with data warehouse context and semantic models to assist with SQL, Python, and visualization tasks while maintaining enterprise-grade security.

Links

Specialized Agents

Purpose-built AI agents designed for specific functions, such as PR reviews.

CodeRabbit

Category

AGENT CONSUMER LAYER - Specialized Agents

Description
  • No description available
Links

Qodo

Category

AGENT CONSUMER LAYER - Specialized Agents

Description
  • No description available
Links

Ellipsis

Category

AGENT CONSUMER LAYER - Specialized Agents

Description

Ellipsis provides AI-powered code reviews and automated bug fixes for GitHub repositories, offering features like style guide enforcement, code generation, and automated testing while maintaining SOC 2 Type 1 compliance and secure processing without data retention.

Links

Codeflash

Category

AGENT CONSUMER LAYER - Specialized Agents

Description

Codeflash is a CI tool that keeps your Python code performant by using AI to automatically find a more optimized version of your code, benchmarking the candidates and verifying the new code for correctness.

Links

Superflex

Category

AGENT CONSUMER LAYER - Specialized Agents

Description

Superflex is a VS Code extension that builds features from Figma designs, images, and text prompts while respecting your design standards, matching your code style, and reusing your UI components.

Links

Codemod

Category

AGENT CONSUMER LAYER - Specialized Agents

Description

Codemod provides AI-powered code migration agents that automate framework migrations, API upgrades, and refactoring at scale, featuring a community registry of migration recipes, AI-assisted codemod creation, and comprehensive migration management capabilities.

Links

Codegen

Category

AGENT CONSUMER LAYER - Specialized Agents

Description

Codegen provides enterprise-grade static analysis and codemod capabilities for large-scale code transformations, featuring advanced visualization tools, automated documentation generation, and platform engineering templates, with SOC 2 Type II certification for secure refactoring at scale.

Links

OBSERVABILITY AND GOVERNANCE LAYER

Development Pipeline

Tools for managing and monitoring AI application lifecycles

Portkey

Category

OBSERVABILITY AND GOVERNANCE LAYER - Development Pipeline

Description

Portkey provides a comprehensive AI gateway and control panel that enables teams to route to 200+ LLMs, implement guardrails, manage prompts, and monitor AI applications with detailed observability features while maintaining SOC2 compliance and HIPAA/GDPR standards.

Links

Baseten

Category

OBSERVABILITY AND GOVERNANCE LAYER - Development Pipeline

Description

Baseten provides high-performance inference infrastructure featuring up to 1,500 tokens/second throughput, sub-100ms latency, and GPU autoscaling, with Truss open-source model packaging, enterprise security (SOC2, HIPAA), and support for deployment in customer clouds or self-hosted environments.

Links

LangServe

Category

OBSERVABILITY AND GOVERNANCE LAYER - Development Pipeline

Description
  • No description available
Links

Stack AI

Category

OBSERVABILITY AND GOVERNANCE LAYER - Development Pipeline

Description

Stack AI provides an enterprise generative AI platform for building and deploying AI applications with a no-code interface, offering pre-built templates, workflow automation, enterprise security features (SOC2, HIPAA, GDPR), and on-premise deployment options with support for multiple AI models and data sources.

Links

Evaluation & Monitoring

Systems for tracking AI performance and behavior

Pydantic Logfire

Category

OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring

Description
  • No description available
Links

Cleanlab

Category

OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring

Description

Cleanlab provides an AI-powered data curation platform that helps organizations improve their GenAI and ML solutions by automatically detecting and fixing data quality issues, reducing hallucinations, and enabling trustworthy AI deployment while offering VPC integration for enhanced security.

Links

Patronus

Category

OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring

Description

Patronus provides a comprehensive AI evaluation platform built on industry-leading research, offering features for testing hallucinations, security risks, alignment, and performance monitoring, with both pre-built evaluators and custom evaluation capabilities for RAG systems and AI agents.

Links

Log10

Category

OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring

Description

Log10 provides an end-to-end AI accuracy platform for evaluating and monitoring LLM applications in high-stakes industries, featuring expert-driven evaluation, automated feedback systems, real-time monitoring, and continuous improvement workflows with built-in security and compliance features.

Links

Traceloop

Category

OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring

Description

Traceloop provides open-source LLM monitoring through OpenLLMetry, offering real-time hallucination detection, output quality monitoring, and prompt debugging capabilities across 22+ LLM providers with zero-intrusion integration.

Links

WhyLabs

Category

OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring

Description

WhyLabs provides a comprehensive AI Control Center for monitoring, securing, and optimizing AI applications, offering real-time LLM monitoring, security guardrails, and privacy-preserving observability with SOC 2 Type 2 compliance and support for multiple modalities.

Links

OpenLLMetry

Category

OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring

Description

OpenLLMetry provides an open-source observability solution for LLMs built on OpenTelemetry standards, offering easy integration with major observability platforms like Datadog, New Relic, and Grafana, requiring just two lines of code to implement.
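
A minimal sketch of that two-line setup, assuming the traceloop-sdk package and its Traceloop.init() entry point; the export destination (Datadog, New Relic, Grafana, or the Traceloop backend) is configured through environment variables rather than code, and the app name below is a placeholder.

```python
# pip install traceloop-sdk
from traceloop.sdk import Traceloop

# Instruments supported LLM SDKs with OpenTelemetry; where the traces are sent
# is controlled by environment variables (e.g. an OTLP endpoint or API key).
Traceloop.init(app_name="my-llm-app")
```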

Links

LangWatch

Category

OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring

Description

LangWatch provides a comprehensive LLMOps platform for optimizing and monitoring LLM performance, featuring automated prompt optimization using DSPy, quality evaluations, performance monitoring, and collaborative tools for AI teams, with enterprise-grade security and self-hosting options.

Links

Risk & Compliance

Frameworks for ensuring responsible AI use and regulatory compliance

Alinia

Category

OBSERVABILITY AND GOVERNANCE LAYER - Risk & Compliance

Description
  • No description available
Links

Guardrails AI

Category

OBSERVABILITY AND GOVERNANCE LAYER - Risk & Compliance

Description
  • No description available
Links

Lakera

Category

OBSERVABILITY AND GOVERNANCE LAYER - Risk & Compliance

Description
  • No description available
Links

Security & Access Control

Tools for protecting AI systems and managing access and user permissions

LiteLLM

Category

OBSERVABILITY AND GOVERNANCE LAYER - Security & Access Control

Description

LiteLLM provides a unified API gateway for managing 100+ LLM providers with OpenAI-compatible formatting, offering features like authentication, load balancing, spend tracking, and monitoring integrations, available both as an open-source solution and enterprise service.
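
As a rough illustration of the OpenAI-compatible surface, the open-source litellm package exposes a single completion() call where only the model string changes between providers; the model name and prompt below are placeholders, and API keys are read from environment variables.

```python
# pip install litellm
from litellm import completion

# The same OpenAI-style call shape works across providers; swapping the model
# string (e.g. to "anthropic/claude-3-5-sonnet-20240620") routes the request
# to a different provider. Keys such as OPENAI_API_KEY come from the env.
response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize what an LLM gateway does."}],
)
print(response.choices[0].message.content)
```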

Links

Martian

Category

OBSERVABILITY AND GOVERNANCE LAYER - Security & Access Control

Description

Martian provides an intelligent LLM routing system that dynamically selects the optimal model for each request, featuring performance prediction, automatic failover, cost optimization (up to 98% savings), and simplified integration, outperforming single models like GPT-4 while ensuring high uptime.

Links

ENGINEERING LAYER

Training & Fine-Tuning

Resources for customizing and optimizing AI models

Lamini

Category

ENGINEERING LAYER - Training & Fine-Tuning

Description

Provides tools for efficient fine-tuning of large language models, including techniques like quantization and memory optimization.

Links

Predibase

Category

ENGINEERING LAYER - Training & Fine-Tuning

Description

Platform for building and deploying machine learning models, with a focus on simplifying the development process and enabling faster iteration.

Links

Modal

Category

ENGINEERING LAYER - Training & Fine-Tuning

Description

Modal offers a serverless cloud platform for AI and ML applications that enables developers to deploy and scale workloads instantly with simple Python code, featuring high-performance GPU infrastructure and pay-per-use pricing.
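
A small sketch of that "simple Python code" deployment model, assuming Modal's current App and function decorators; the GPU type and app name are illustrative.

```python
# pip install modal   (run with: modal run this_file.py)
import modal

app = modal.App("gpu-demo")

@app.function(gpu="A10G")          # request a GPU-backed container for this call
def square(x: int) -> int:
    return x * x

@app.local_entrypoint()
def main():
    # .remote() executes the function on Modal's serverless infrastructure
    print(square.remote(21))       # -> 441
```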

Links

Bloop

Category

ENGINEERING LAYER - Training & Fine-Tuning

Description
  • No description available
Links

Julius

Category

ENGINEERING LAYER - Training & Fine-Tuning

Description
  • No description available
Links

Fine Tuner

Category

ENGINEERING LAYER - Training & Fine-Tuning

Description
  • No description available
Links

Codeanywhere

Category

ENGINEERING LAYER - Training & Fine-Tuning

Description

Provides cloud development workspaces with GPU support.

Links

Lightning AI

Category

ENGINEERING LAYER - Training & Fine-Tuning

Description

Lightning AI provides a comprehensive platform for building AI products, featuring GPU access, development environments, training capabilities, and deployment tools, with support for enterprise-grade security, multi-cloud deployment, and team collaboration, used by major organizations like NVIDIA and Microsoft.

Links

Tools

Development utilities, libraries and services for building AI applications

Relevance AI

Category

ENGINEERING LAYER - Tools

Description

Relevance AI provides a no-code AI workforce platform that enables businesses to build, customize, and manage AI agents and tools for various functions like sales and support, featuring Bosh, their AI Sales Agent, while ensuring enterprise-grade security and compliance.

Links

Greptile

Category

ENGINEERING LAYER - Tools

Description

Greptile provides an AI-powered code analysis platform that helps software teams ship faster by offering intelligent code reviews, codebase chat, and custom dev tools with full contextual understanding, while maintaining SOC2 Type II compliance and optional self-hosting capabilities.

Links

Sourcegraph

Category

ENGINEERING LAYER - Tools

Description

Sourcegraph provides a code intelligence platform featuring Cody, an AI coding assistant, and advanced code search capabilities that help developers navigate, understand, and modify complex codebases while automating routine tasks across enterprise environments.

Links

PromptLayer

Category

ENGINEERING LAYER - Tools

Description

PromptLayer provides a comprehensive prompt engineering platform that enables technical and non-technical teams to collaboratively edit, evaluate, and deploy LLM prompts through a visual CMS, while offering version control, A/B testing, and monitoring capabilities with SOC 2 Type 2 compliance.

Links

Gretel.ai

Category

ENGINEERING LAYER - Tools

Description
  • No description available
Links

Mostly.ai

Category

ENGINEERING LAYER - Tools

Description
  • No description available
Links

Tonic.ai

Category

ENGINEERING LAYER - Tools

Description
  • No description available
Links

Rockfish.ai

Category

ENGINEERING LAYER - Tools

Description
  • No description available
Links

JigsawStack

Category

ENGINEERING LAYER - Tools

Description

JigsawStack provides a comprehensive suite of AI APIs including web scraping, translation, speech-to-text, OCR, prediction, and prompt optimization, offering globally distributed infrastructure with type-safe SDKs and built-in monitoring capabilities across 99+ locations.

Links

Testing & Quality Assurance

Systems for validating AI performance and reliability

LangSmith

Category

ENGINEERING LAYER - Testing & Quality Assurance

Description
  • No description available
Links

Galileo

Category

ENGINEERING LAYER - Testing & Quality Assurance

Description
  • No description available
Links

Arize

Category

ENGINEERING LAYER - Testing & Quality Assurance

Description
  • No description available
Links

Weights & Biases

Category

ENGINEERING LAYER - Testing & Quality Assurance

Description
  • No description available
Links

AgentOps

Category

ENGINEERING LAYER - Testing & Quality Assurance

Description
  • No description available
Links

Confident AI

Category

ENGINEERING LAYER - Testing & Quality Assurance

Description

Confident AI provides an LLM evaluation platform that enables organizations to benchmark, unit test, and monitor their LLM applications through automated regression testing, A/B testing, and synthetic dataset generation, while offering research-backed evaluation metrics and comprehensive observability features.
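
Confident AI's evaluations are commonly driven through its open-source DeepEval framework; a minimal sketch, assuming DeepEval's LLMTestCase and AnswerRelevancyMetric API, with placeholder inputs and outputs.

```python
# pip install deepeval
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

# One test case pairing a prompt with the output the application produced.
test_case = LLMTestCase(
    input="What does an LLM evaluation platform do?",
    actual_output="It scores model outputs against metrics like relevancy.",
)

# A research-backed metric with a pass/fail threshold; evaluate() runs the
# metric over the test cases and can report results to the hosted dashboard.
evaluate(test_cases=[test_case], metrics=[AnswerRelevancyMetric(threshold=0.7)])
```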

Links

ContextQA

Category

ENGINEERING LAYER - Testing & Quality Assurance

Description

AI agent specifically designed for software testing and quality assurance, automating the testing process and providing comprehensive test coverage.

Links

Braintrust

Category

ENGINEERING LAYER - Testing & Quality Assurance

Description

Braintrust provides an end-to-end platform for evaluating and testing LLM applications, offering features like prompt testing, custom scoring, dataset management, real-time tracing, and production monitoring, with support for both UI-based and SDK-driven workflows.

Links

INTELLIGENCE LAYER

Frameworks

Core libraries and building blocks for AI application development

LangChain

Category

INTELLIGENCE LAYER - Frameworks

Description
  • No description available
Links

LlamaIndex

Category

INTELLIGENCE LAYER - Frameworks

Description
  • No description available
Links

Haystack

Category

INTELLIGENCE LAYER - Frameworks

Description
  • No description available
Links

DSPy

Category

INTELLIGENCE LAYER - Frameworks

Description
  • No description available
Links

Pydantic AI

Category

INTELLIGENCE LAYER - Frameworks

Description
  • No description available
Links

Letta

Category

INTELLIGENCE LAYER - Frameworks

Description

Provides an agent development platform with advanced memory management for LLMs, enabling developers to build, deploy, and scale production-ready AI agents with transparent reasoning and model-agnostic flexibility.

Links

AutoGen

Category

INTELLIGENCE LAYER - Frameworks

Description

Framework for developing LLM applications with multiple conversational agents that collaborate and interact with humans.

Links
Category

INTELLIGENCE LAYER - Frameworks

Description

A framework for creating and managing workflows and tasks for AI agents.

Links

Toolhouse

Category

INTELLIGENCE LAYER - Frameworks

Description

Toolhouse provides a cloud infrastructure platform and universal SDK that enables developers to equip LLMs with actions and knowledge through a Tool Store, offering pre-built optimized functions, low-latency execution, and cross-LLM compatibility with just three lines of code.

Links

Composio

Category

INTELLIGENCE LAYER - Frameworks

Description

Composio provides an integration platform for AI agents and LLMs with 250+ pre-built tools, managed authentication, and RPA capabilities, enabling developers to easily connect their AI applications with various services while maintaining SOC-2 compliance and supporting multiple agent frameworks.

Links

CrewAI

Category

INTELLIGENCE LAYER - Frameworks

Description

CrewAI provides a comprehensive platform for building, deploying, and managing multi-agent AI systems, offering both open-source framework and enterprise solutions with support for any LLM and cloud platform, enabling organizations to create automated workflows across various industries.

Links

AI Suite

Category

INTELLIGENCE LAYER - Frameworks

Description

AI Suite provides a unified interface for multiple LLM providers (OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, etc.), offering standardized API access with OpenAI-compatible syntax, easy provider switching, and seamless integration capabilities, available as an open-source MIT-licensed framework.
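
A brief sketch of that provider-switching idea, assuming the open-source aisuite package and its "provider:model" naming convention; the model string and prompt are placeholders, and provider API keys come from environment variables.

```python
# pip install aisuite
import aisuite as ai

client = ai.Client()  # provider API keys are read from environment variables

# Models are addressed as "<provider>:<model>", so changing providers is a
# one-string edit while the OpenAI-style call shape stays the same.
response = client.chat.completions.create(
    model="anthropic:claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```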

Links

Promptflow

Category

INTELLIGENCE LAYER - Frameworks

Description

Promptflow is Microsoft's open-source development framework for LLM applications, offering tools for flow creation, testing, evaluation, and deployment, featuring visual flow design through VS Code extension, built-in evaluation metrics, and CI/CD integration capabilities.

Links

LLMStack

Category

INTELLIGENCE LAYER - Frameworks

Description

LLMStack is an open-source platform for building AI agents, workflows, and applications, featuring model chaining across major providers, data integration from multiple sources (PDFs, URLs, Audio, Drive), and collaborative development capabilities with granular permissions.

Links

Knowledge Engines

Systems for managing and retrieving information

Pinecone

Category

INTELLIGENCE LAYER - Knowledge Engines

Description
  • No description available
Links

Weaviate

Category

INTELLIGENCE LAYER - Knowledge Engines

Description
  • No description available
Links

Chroma

Category

INTELLIGENCE LAYER - Knowledge Engines

Description
  • No description available
Links

Milvus

Category

INTELLIGENCE LAYER - Knowledge Engines

Description
  • No description available
Links

Qdrant

Category

INTELLIGENCE LAYER - Knowledge Engines

Description
  • No description available
Links

MongoDB Atlas

Category

INTELLIGENCE LAYER - Knowledge Engines

Description
  • No description available
Links

Supabase

Category

INTELLIGENCE LAYER - Knowledge Engines

Description

Supabase Vector provides an open-source vector database built on Postgres and pgvector, offering scalable embedding storage, indexing, and querying capabilities with integrated AI tooling for OpenAI and Hugging Face, featuring enterprise-grade security and global deployment options.
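
Because the store is plain Postgres plus pgvector, it can be reached over an ordinary database connection; a rough sketch using psycopg2, a placeholder connection string, and a toy 3-dimensional embedding (Supabase also ships higher-level client libraries).

```python
# pip install psycopg2-binary
import psycopg2

# The Supabase project exposes a standard Postgres connection string.
conn = psycopg2.connect("postgresql://user:password@db.example.supabase.co:5432/postgres")
cur = conn.cursor()

# pgvector adds a vector column type and distance operators such as <->.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute(
    "CREATE TABLE IF NOT EXISTS documents ("
    "id bigserial PRIMARY KEY, body text, embedding vector(3));"
)
cur.execute(
    "INSERT INTO documents (body, embedding) VALUES (%s, %s::vector);",
    ("hello world", "[0.1, 0.2, 0.3]"),
)
# Nearest-neighbour search: order rows by distance to the query embedding.
cur.execute(
    "SELECT body FROM documents ORDER BY embedding <-> %s::vector LIMIT 5;",
    ("[0.1, 0.2, 0.25]",),
)
print(cur.fetchall())
conn.commit()
```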

Links

Contextual AI

Category

INTELLIGENCE LAYER - Knowledge Engines

Description

Contextual AI provides enterprise-grade RAG (Retrieval-Augmented Generation) solutions that enable organizations in regulated industries to build and deploy production-ready AI applications for searching and analyzing large volumes of business-critical documents.

Links

Unstructured

Category

INTELLIGENCE LAYER - Knowledge Engines

Description

Platform for working with unstructured data, offering tools for data pre-processing, ETL, and integration with LLMs.

Links

SciPhi

Category

INTELLIGENCE LAYER - Knowledge Engines

Description

SciPhi offers R2R, an all-in-one RAG (Retrieval Augmented Generation) solution that enables developers to build and scale AI applications with advanced features including document management, hybrid vector search, and knowledge graphs, while providing superior ingestion performance compared to competitors.

Links

pgAI

Category

INTELLIGENCE LAYER - Knowledge Engines

Description

pgAI is a PostgreSQL extension that enables AI capabilities directly in the database, featuring automated vector embedding creation, RAG implementation, semantic search, and LLM integration (OpenAI, Claude, Cohere, Llama) with support for high-performance vector operations through pgvector and pgvectorscale.

Links

Specialized Coding Models

AI models optimized for software development

Codestral

Category

INTELLIGENCE LAYER - Specialized Coding Models

Description

Codestral is Mistral AI's specialized 22B code generation model supporting 80+ programming languages, featuring a 32k context window, fill-in-the-middle capabilities, and state-of-the-art performance on coding benchmarks, available through API endpoints and IDE integrations.

Links

Claude 3.5 Sonnet

Category

INTELLIGENCE LAYER - Specialized Coding Models

Description

Claude 3.5 Sonnet is Anthropic's frontier AI model offering state-of-the-art performance in reasoning, coding, and vision tasks, featuring a 200K token context window, computer use capabilities, and enhanced safety measures, available through multiple platforms including Claude.ai and major cloud providers.

Links

Qwen2.5-Coder-32B

Category

INTELLIGENCE LAYER - Specialized Coding Models

Description

Qwen2.5-Coder is a specialized code-focused model matching GPT-4's coding capabilities, featuring 32B parameters, 128K token context window, support for 80+ programming languages, and state-of-the-art performance on coding benchmarks, available as an open-source Apache 2.0 licensed model.
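
Since the weights are openly licensed, the model can be loaded with the standard Hugging Face transformers API; a sketch assuming the Qwen/Qwen2.5-Coder-32B-Instruct checkpoint (the 32B variant needs substantial GPU memory, so a smaller variant may be more practical locally).

```python
# pip install transformers torch accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated completion.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```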

Links

Poolside Malibu

Category

INTELLIGENCE LAYER - Specialized Coding Models

Description

Poolside Malibu is an enterprise-focused code generation model trained using Reinforcement Learning from Code Execution Feedback (RLCEF), featuring 100K token context, custom fine-tuning capabilities, and deep integration with development environments, available through Amazon Bedrock for secure deployment.

Links

INFRASTRUCTURE LAYER

AI Workspaces

Development environments for sandboxing and building AI applications

Daytona

Category

INFRASTRUCTURE LAYER - AI Workspaces

Description

Daytona.io is an open-source Development Environment Manager designed to simplify the setup and management of development environments across various platforms, including local, remote, and cloud infrastructures.

Links

Runloop

Category

INFRASTRUCTURE LAYER - AI Workspaces

Description

Runloop provides a secure, high-performance infrastructure platform that enables developers to build, scale, and deploy AI-powered coding solutions with seamless integration and real-time monitoring capabilities.

Links

E2B

Category

INFRASTRUCTURE LAYER - AI Workspaces

Description

E2B provides an open-source runtime platform that enables developers to securely execute AI-generated code in cloud sandboxes, supporting multiple languages and frameworks for AI-powered development use cases.
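
A rough sketch of the sandboxed-execution idea, assuming the e2b-code-interpreter Python package and its Sandbox.run_code API (an E2B_API_KEY is expected in the environment); treat the exact method names as illustrative.

```python
# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

# Each Sandbox is an isolated cloud VM, so untrusted or AI-generated code
# runs there instead of on the host machine.
with Sandbox() as sandbox:
    execution = sandbox.run_code("sum(range(10))")
    print(execution.text)  # -> "45"
```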

Links

Morph Labs

Category

INFRASTRUCTURE LAYER - AI Workspaces

Description

Morph Labs provides infrastructure for developing and deploying autonomous software engineers at scale, offering Infinibranch for Morph Cloud and focusing on advanced infrastructure for AI-powered development, backed by partnerships with Together AI, Nomic AI, and other leading AI companies.

Links

Model Access & Deployment

Services for deploying and running AI models

OpenAI

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

OpenAI develops advanced artificial intelligence systems like ChatGPT, GPT-4, and Sora, focusing on creating safe AGI that benefits humanity through products spanning language models, image generation, and video creation while maintaining leadership in AI research and safety.

Links

Anthropic

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Anthropic provides frontier AI models through the Claude family, emphasizing safety and reliability, with offerings including Claude 3.5 Sonnet and Haiku. Their models feature advanced capabilities in reasoning, coding, and computer use, while maintaining strong safety standards through Constitutional AI and comprehensive testing.
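
For reference, a minimal call through Anthropic's Python SDK and Messages API; the model name and prompt are illustrative, and ANTHROPIC_API_KEY is read from the environment.

```python
# pip install anthropic
from anthropic import Anthropic

client = Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=512,
    messages=[{"role": "user", "content": "Explain Constitutional AI in two sentences."}],
)
print(message.content[0].text)  # first content block of the response
```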

Links

Mistral AI

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Mistral AI provides frontier AI models with emphasis on openness and portability, offering both open-weight models (Mistral 7B, Mixtral 8x7B) and commercial models (Mistral Large 2), available through multiple deployment options including serverless APIs, cloud services, and on-premise deployment.

Links

Groq

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Groq provides ultra-fast AI inference infrastructure for openly-available models like Llama 3.1, Mixtral, and Gemma, offering OpenAI-compatible API endpoints with industry-leading speed and simple three-line integration for existing applications.
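
Because the endpoint is OpenAI-compatible, the stock OpenAI Python client can be pointed at Groq; a short sketch with a placeholder model name and the API key taken from the environment.

```python
# pip install openai
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",   # Groq's OpenAI-compatible endpoint
    api_key=os.environ["GROQ_API_KEY"],
)

chat = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "Why does inference latency matter for agents?"}],
)
print(chat.choices[0].message.content)
```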

Links

AI21labs

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

AI21 Labs delivers enterprise-grade generative AI solutions through its Jamba foundation model and RAG engine, enabling organizations to build secure, production-ready AI applications with flexible deployment options and dedicated integration support.

Links

Cohere

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Cohere provides an enterprise AI platform featuring advanced language models, embedding, and retrieval capabilities that enables businesses to build production-ready AI applications with flexible deployment options across cloud or on-premises environments.

Links

Hugging Face

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Hugging Face provides fully managed inference infrastructure for ML models with support for multiple hardware options (CPU, GPU, TPU) across various cloud providers, offering autoscaling and dedicated deployments with enterprise-grade security.

Links

Cartesia

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Cartesia AI delivers real-time multimodal intelligence through state space models that enable fast, private, and offline inference capabilities across devices, offering streaming-first solutions with constant memory usage and low latency.

Links

Fireworks

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Provides easy access to open-source language models through a simple API, similar to offerings from closed-source providers.

Links

Together.AI

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Offers an API for accessing and running open-source LLMs, facilitating seamless integration into AI applications.

Links

Google Vertex AI

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

End-to-end platform for deploying and managing AI models, including LLMs, with integrated tools for monitoring, versioning, and scaling.

Links

Amazon Nova

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Amazon Nova provides state-of-the-art foundation models through Amazon Bedrock, offering multiple model variants (Micro, Lite, Pro, Canvas, Reel) for text, image, and video processing with industry-leading price-performance, fine-tuning capabilities, and enterprise-grade features.

Links

Replicate

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

Serverless platform for running machine learning models, allowing developers to deploy and scale models without managing infrastructure.
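
A minimal sketch of that serverless model, assuming the replicate Python client and a public language model reference (REPLICATE_API_TOKEN is read from the environment; the model slug is illustrative).

```python
# pip install replicate
import replicate

# replicate.run() starts a prediction on Replicate's infrastructure and waits
# for the result; language models typically stream output as a list of strings.
output = replicate.run(
    "meta/meta-llama-3-8b-instruct",
    input={"prompt": "Name three benefits of serverless model hosting."},
)
print("".join(output))
```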

Links

SambaNova

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

SambaNova provides custom AI infrastructure featuring their SN40L Reconfigurable Dataflow Unit (RDU), offering world-record inference speeds for large language models, with integrated fine-tuning capabilities and enterprise-grade security, delivered through both cloud and on-premises solutions.

Links

BentoML

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

BentoML provides an open-source unified inference platform that enables organizations to build, deploy, and scale AI systems across any cloud with high performance and flexibility, while offering enterprise features like auto-scaling, rapid iteration, and SOC II compliance.

Links

OpenRouter

Category

INFRASTRUCTURE LAYER - Model Access & Deployment

Description

OpenRouter provides a unified OpenAI-compatible API for accessing 282+ models across multiple providers, offering standardized access, provider routing, and model rankings, with support for multiple SDKs and framework integrations.
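
Since the API is OpenAI-compatible, a single OpenAI client can reach any routed model by changing the model slug; a short sketch with the API key taken from the environment.

```python
# pip install openai
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",     # OpenRouter's unified endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

chat = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",         # provider/model slug selects the upstream
    messages=[{"role": "user", "content": "What does an LLM router do?"}],
)
print(chat.choices[0].message.content)
```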

Links

Cloud Providers

Computing infrastructure that powers AI systems and their workspaces

AWS

Category

INFRASTRUCTURE LAYER - Cloud Providers

Description
  • No description available
Links

Azure

Category

INFRASTRUCTURE LAYER - Cloud Providers

Description
  • No description available
Links

GCP

Category

INFRASTRUCTURE LAYER - Cloud Providers

Description
  • No description available
Links

Koyeb

Category

INFRASTRUCTURE LAYER - Cloud Providers

Description

Koyeb provides a high-performance serverless platform specifically optimized for AI workloads, offering GPU/NPU infrastructure, global deployment across 50+ locations, and seamless scaling capabilities for ML model inference and training with built-in observability.

Links

CoreWeave

Category

INFRASTRUCTURE LAYER - Cloud Providers

Description

CoreWeave is an AI-focused cloud provider offering Kubernetes-native infrastructure optimized for GPU workloads, featuring 11+ NVIDIA GPU types, up to 35x faster performance and 80% cost reduction compared to traditional providers, with specialized solutions for ML/AI, VFX, and inference at scale.

Links

Nebius

Category

INFRASTRUCTURE LAYER - Cloud Providers

Description

Nebius provides an AI-optimized cloud platform featuring the latest NVIDIA GPUs (H200, H100, L40S) with InfiniBand networking, offering managed Kubernetes and Slurm clusters, MLflow integration, and specialized infrastructure for AI training, fine-tuning, and inference workloads.

Links

Contributing

Please read the contribution guidelines before submitting a pull request.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
