Below you can find code snippets that demonstrate the usage of many Semantic Kernel features.

You can run these tests from your IDE or from the command line. To run a test from the command line, run the following command from the root of the Concepts project:

```
dotnet test -l "console;verbosity=detailed" --filter "FullyQualifiedName=NameSpace.TestClass.TestMethod"
```
Example for the `ChatCompletion/OpenAI_ChatCompletion.cs` file, targeting the `ChatPromptSync` test:

```
dotnet test -l "console;verbosity=detailed" --filter "FullyQualifiedName=ChatCompletion.OpenAI_ChatCompletion.ChatPromptSync"
```
Agents - Different ways of using Agents
- ComplexChat_NestedShopper
- Legacy_AgentAuthoring
- Legacy_AgentCharts
- Legacy_AgentCollaboration
- Legacy_AgentDelegation
- Legacy_AgentTools
- Legacy_Agents
- Legacy_ChatCompletionAgent
- MixedChat_Agents
- OpenAIAssistant_ChartMaker
- OpenAIAssistant_CodeInterpreter
- OpenAIAssistant_Retrieval
AudioToText - Different ways of using AudioToText services to extract text from audio
FunctionCalling - Examples on Function Calling with function-call capable models (a minimal sketch follows this list)
- Gemini_FunctionCalling
- FunctionCalling
- NexusRaven_HuggingFaceTextGeneration
- MultipleFunctionsVsParameters
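To give a flavor of these samples, automatic function calling with the OpenAI connector looks roughly like this (a minimal sketch; the model id, API key, and `TimePlugin` are placeholders, not part of the samples):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4", apiKey: "...") // placeholder credentials
    .Build();
kernel.ImportPluginFromType<TimePlugin>();

// Allow the model to invoke TimePlugin.GetUtcNow automatically when it helps.
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
var result = await kernel.InvokePromptAsync(
    "What time is it in UTC?",
    new KernelArguments(settings));
Console.WriteLine(result);

// A simple native plugin the model can call (illustrative only).
public sealed class TimePlugin
{
    [KernelFunction, Description("Returns the current UTC time.")]
    public string GetUtcNow() => DateTime.UtcNow.ToString("R");
}
```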
ChatCompletion - Examples using ChatCompletion messaging capable service with models (a basic usage sketch follows this list)
- AzureAIInference_ChatCompletion
- AzureAIInference_ChatCompletionStreaming
- AzureOpenAI_ChatCompletion
- AzureOpenAI_ChatCompletionStreaming
- AzureOpenAI_CustomClient
- AzureOpenAIWithData_ChatCompletion
- ChatHistoryAuthorName
- ChatHistoryInFunctions
- ChatHistorySerialization
- Connectors_CustomHttpClient
- Connectors_KernelStreaming
- Connectors_WithMultipleLLMs
- Google_GeminiChatCompletion
- Google_GeminiChatCompletionStreaming
- Google_GeminiGetModelResult
- Google_GeminiVision
- OpenAI_ChatCompletion
- OpenAI_ChatCompletionStreaming
- OpenAI_ChatCompletionWithVision
- OpenAI_CustomClient
- OpenAI_UsingLogitBias
- OpenAI_FunctionCalling
- OpenAI_ReasonedFunctionCalling
- MultipleProviders_ChatHistoryReducer
- MistralAI_ChatPrompt
- MistralAI_FunctionCalling
- MistralAI_StreamingFunctionCalling
- Onnx_ChatCompletion
- Onnx_ChatCompletionStreaming
- Ollama_ChatCompletion
- Ollama_ChatCompletionStreaming
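The common pattern behind these samples is the `IChatCompletionService` request/response loop, roughly as follows (a minimal sketch with placeholder credentials):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4", apiKey: "...") // placeholder credentials
    .Build();

var chatService = kernel.GetRequiredService<IChatCompletionService>();

var history = new ChatHistory("You are a helpful assistant.");
history.AddUserMessage("What is Semantic Kernel?");

// Non-streaming call; the *_Streaming samples use
// GetStreamingChatMessageContentsAsync to receive tokens as they arrive.
var reply = await chatService.GetChatMessageContentAsync(history);
history.AddAssistantMessage(reply.Content ?? string.Empty);
Console.WriteLine(reply.Content);
```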
Filtering - Different ways of filtering with Kernel (a filter sketch follows this list)
- AutoFunctionInvocationFiltering
- FunctionInvocationFiltering
- Legacy_KernelHooks
- MaxTokensWithFilters
- PIIDetectionWithFilters
- PromptRenderFiltering
- RetryWithFilters
- TelemetryWithFilters
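These samples build on interfaces such as `IFunctionInvocationFilter`, which wraps every kernel function call. A minimal sketch (the `ConsoleLoggingFilter` name is illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Runs around every kernel function invocation.
public sealed class ConsoleLoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        Console.WriteLine($"Invoking {context.Function.Name}");
        await next(context); // continue the pipeline (or skip it to short-circuit)
        Console.WriteLine($"Result: {context.Result}");
    }
}
```

A filter can be registered directly on the kernel via `kernel.FunctionInvocationFilters.Add(new ConsoleLoggingFilter())` or through dependency injection.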
Functions - Invoking Method or Prompt functions with Kernel (a sketch follows this list)
- Arguments
- FunctionResult_Metadata
- FunctionResult_StronglyTyped
- MethodFunctions
- MethodFunctions_Advanced
- MethodFunctions_Types
- MethodFunctions_Yaml
- PromptFunctions_Inline
- PromptFunctions_MultipleArguments
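These samples all revolve around creating a `KernelFunction` from a method or a prompt and invoking it with `KernelArguments`, roughly like this (a minimal sketch with placeholder credentials):

```csharp
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4", apiKey: "...") // placeholder credentials
    .Build();

// Method function: plain C# wrapped as a KernelFunction.
var toUpper = kernel.CreateFunctionFromMethod(
    (string input) => input.ToUpperInvariant(), functionName: "ToUpper");
var upper = await kernel.InvokeAsync(toUpper, new() { ["input"] = "hello" });
Console.WriteLine(upper); // HELLO

// Prompt function: a template whose {{$variables}} are rendered before the LLM call.
var joke = kernel.CreateFunctionFromPrompt("Tell a short joke about {{$topic}}.");
var result = await kernel.InvokeAsync(joke, new() { ["topic"] = "kernels" });
Console.WriteLine(result);
```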
ImageToText - Using ImageToText services to describe images
Memory - Using AI Memory concepts (a text-chunking sketch follows this list)
- OpenAI_EmbeddingGeneration
- Ollama_EmbeddingGeneration
- Onnx_EmbeddingGeneration
- HuggingFace_EmbeddingGeneration
- MemoryStore_CustomReadOnly
- SemanticTextMemory_Building
- TextChunkerUsage
- TextChunkingAndEmbedding
- TextMemoryPlugin_GeminiEmbeddingGeneration
- TextMemoryPlugin_MultipleMemoryStore
- TextMemoryPlugin_RecallJsonSerializationWithOptions
- VectorStore_DataIngestion_Simple: A simple example of how to do data ingestion into a vector store when getting started.
- VectorStore_DataIngestion_MultiStore: An example of data ingestion that uses the same code to ingest into multiple vector store types.
- VectorStore_DataIngestion_CustomMapper: An example that shows how to use a custom mapper when your data model and storage model don't match.
- VectorStore_GenericDataModel_Interop: An example that shows how you can use the built-in, generic data model from Semantic Kernel to read and write to a Vector Store.
- VectorStore_ConsumeFromMemoryStore_AzureAISearch: An example that shows how you can use the AzureAISearchVectorStore to consume data that was ingested using the AzureAISearchMemoryStore.
- VectorStore_ConsumeFromMemoryStore_Qdrant: An example that shows how you can use the QdrantVectorStore to consume data that was ingested using the QdrantMemoryStore.
- VectorStore_ConsumeFromMemoryStore_Redis: An example that shows how you can use the RedisVectorStore to consume data that was ingested using the RedisMemoryStore.
- VectorStore_MigrateFromMemoryStore_Redis: An example that shows how you can use the RedisMemoryStore and RedisVectorStore to migrate data to a new schema.
- VectorStore_Langchain_Interop: An example that shows how you can use various Vector Stores to consume data that was ingested using Langchain.
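As a small taste of the utilities above (see TextChunkerUsage and TextChunkingAndEmbedding), `TextChunker` splits text into embedding-sized chunks. A minimal sketch (the file name and token budgets are illustrative):

```csharp
#pragma warning disable SKEXP0050 // TextChunker is marked experimental
using Microsoft.SemanticKernel.Text;

var text = File.ReadAllText("document.txt"); // placeholder input

// Split into lines, then pack lines into paragraphs under a token budget.
var lines = TextChunker.SplitPlainTextLines(text, maxTokensPerLine: 60);
var paragraphs = TextChunker.SplitPlainTextParagraphs(lines, maxTokensPerParagraph: 200);

foreach (var paragraph in paragraphs)
{
    // Each chunk would typically be embedded and upserted into a vector store.
    Console.WriteLine(paragraph);
}
```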
Plugins - Different ways of creating and using Plugins (an OpenAPI import sketch follows this list)
- ApiManifestBasedPlugins
- ConversationSummaryPlugin
- CreatePluginFromOpenApiSpec_Github
- CreatePluginFromOpenApiSpec_Jira
- CreatePluginFromOpenApiSpec_Klarna
- CreatePluginFromOpenApiSpec_RepairService
- OpenApiPlugin_PayloadHandling
- OpenApiPlugin_CustomHttpContentReader
- OpenApiPlugin_Customization
- OpenApiPlugin_Filtering
- OpenApiPlugin_Telemetry
- CustomMutablePlugin
- DescribeAllPluginsAndFunctions
- GroundednessChecks
- ImportPluginFromGrpc
- TransformPlugin
- CopilotAgentBasedPlugins
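For example, the CreatePluginFromOpenApiSpec_* samples boil down to importing an OpenAPI document as a plugin, roughly as follows (a sketch; the spec URL is a placeholder, the extension lives in the Microsoft.SemanticKernel.Plugins.OpenApi package, and depending on your package version you may need to suppress experimental-API warnings):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Plugins.OpenApi;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4", apiKey: "...") // placeholder credentials
    .Build();

// Each operation in the OpenAPI document becomes a callable KernelFunction.
var plugin = await kernel.ImportPluginFromOpenApiAsync(
    pluginName: "repairService",
    uri: new Uri("https://example.com/openapi.json"), // placeholder spec URL
    executionParameters: new OpenApiFunctionExecutionParameters());

foreach (var function in plugin)
{
    Console.WriteLine($"{function.Name}: {function.Description}");
}
```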
PromptTemplates - Using Templates with parametrization for Prompt rendering (a Handlebars sketch follows this list)
- ChatCompletionPrompts
- ChatWithPrompts
- HandlebarsPrompts
- LiquidPrompts
- MultiplePromptTemplates
- PromptFunctionsWithChatGPT
- TemplateLanguage
- PromptyFunction
- HandlebarsVisionPrompts
- SafeChatPrompts
- ChatLoopWithPrompt
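For instance, a Handlebars template (as in HandlebarsPrompts) is wired in through a template factory, roughly like this (a minimal sketch; requires the Microsoft.SemanticKernel.PromptTemplates.Handlebars package):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4", apiKey: "...") // placeholder credentials
    .Build();

// Handlebars uses {{variable}} syntax instead of the default {{$variable}}.
var function = kernel.CreateFunctionFromPrompt(
    new PromptTemplateConfig
    {
        Template = "Write a haiku about {{topic}}.",
        TemplateFormat = "handlebars"
    },
    new HandlebarsPromptTemplateFactory());

var result = await kernel.InvokeAsync(function, new() { ["topic"] = "autumn" });
Console.WriteLine(result);
```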
TextGeneration - Examples using TextGeneration capable services with models
TextToAudio - Using TextToAudio services to generate audio
TextToImage - Using TextToImage services to generate images
Concept samples require secrets and credentials to access OpenAI, Azure OpenAI, Bing, and other resources.
We suggest using .NET Secret Manager to avoid the risk of leaking secrets into the repository, branches, and pull requests. You can also use environment variables if you prefer.
To set your secrets with Secret Manager:
```
cd dotnet/src/samples/Concepts

dotnet user-secrets init

dotnet user-secrets set "OpenAI:ServiceId" "gpt-3.5-turbo-instruct"
dotnet user-secrets set "OpenAI:ModelId" "gpt-3.5-turbo-instruct"
dotnet user-secrets set "OpenAI:ChatModelId" "gpt-4"
dotnet user-secrets set "OpenAI:ApiKey" "..."
...
```
To set your secrets with an `appsettings.Development.json` file instead:

- Create an `appsettings.Development.json` file next to the `Concepts.csproj` file. This file is ignored by git, so its content will not end up in pull requests and it is safe for personal settings. Keep the file safe.
- Edit `appsettings.Development.json` and set the appropriate configuration for the samples you are running.
For example:
```json
{
  "OpenAI": {
    "ServiceId": "gpt-3.5-turbo-instruct",
    "ModelId": "gpt-3.5-turbo-instruct",
    "ChatModelId": "gpt-4",
    "ApiKey": "sk-...."
  },
  "AzureOpenAI": {
    "ServiceId": "azure-gpt-35-turbo-instruct",
    "DeploymentName": "gpt-35-turbo-instruct",
    "ChatDeploymentName": "gpt-4",
    "Endpoint": "https://contoso.openai.azure.com/",
    "ApiKey": "...."
  }
  // etc.
}
```
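For reference, the samples resolve these settings through Microsoft.Extensions.Configuration, layering the sources roughly like this (a sketch; the exact wiring in the sample infrastructure may differ):

```csharp
using Microsoft.Extensions.Configuration;

// Later sources override earlier ones, which is why environment
// variables take precedence over appsettings.Development.json.
var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.Development.json", optional: true)
    .AddUserSecrets<Program>() // requires a UserSecretsId (dotnet user-secrets init)
    .AddEnvironmentVariables()
    .Build();

string? apiKey = configuration["OpenAI:ApiKey"];
```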
You may also set the settings in your environment variables. The environment variables will override the settings in the `appsettings.Development.json` file.
When setting environment variables, use a double underscore (i.e. "__") to delineate between parent and child properties. For example:
- bash:

```
export OpenAI__ApiKey="sk-...."
export AzureOpenAI__ApiKey="...."
export AzureOpenAI__DeploymentName="gpt-35-turbo-instruct"
export AzureOpenAI__ChatDeploymentName="gpt-4"
export AzureOpenAIEmbeddings__DeploymentName="azure-text-embedding-ada-002"
export AzureOpenAI__Endpoint="https://contoso.openai.azure.com/"
export HuggingFace__ApiKey="...."
export Bing__ApiKey="...."
export Postgres__ConnectionString="...."
```
- PowerShell:

```
$env:OpenAI__ApiKey = "sk-...."
$env:AzureOpenAI__ApiKey = "...."
$env:AzureOpenAI__DeploymentName = "gpt-35-turbo-instruct"
$env:AzureOpenAI__ChatDeploymentName = "gpt-4"
$env:AzureOpenAIEmbeddings__DeploymentName = "azure-text-embedding-ada-002"
$env:AzureOpenAI__Endpoint = "https://contoso.openai.azure.com/"
$env:HuggingFace__ApiKey = "...."
$env:Bing__ApiKey = "...."
$env:Postgres__ConnectionString = "...."
```