microsoft / kernel-memory

RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.

Home Page: https://microsoft.github.io/kernel-memory

Using the Semantic Kernel plugin raises the error "System.Collections.Generic.KeyNotFoundException: 'The given key 'url' was not present in the dictionary.'"

qmatteoq opened this issue

I have two projects:

  • One that, using Kernel Memory, stores documents as embeddings into Azure AI Search (the ingestion side is sketched below)
  • One, based on Semantic Kernel, that I want to use to ask questions about documents that I have stored using the first project
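
For context, the ingestion side of the first project is roughly equivalent to this sketch (the file name and document id are placeholders; the config objects are the same ones used in the code below):

var memory = new KernelMemoryBuilder()
    .WithAzureOpenAITextGeneration(chatConfig)
    .WithAzureOpenAITextEmbeddingGeneration(embeddingConfig)
    .WithAzureAISearch(searchEndpoint, searchApiKey)
    .Build();

// Extracts the text, partitions it, generates embeddings, and saves the
// records into the Azure AI Search index
await memory.ImportDocumentAsync("contoso-electronics.pdf", documentId: "ce01");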

I don't know if the problem is caused by the RC version of Semantic Kernel I'm using (more precisely, RC3), which is a bit unstable these days, but as soon as the Ask function from the Kernel Memory plugin is invoked, the operation always throws the following exception:

System.Collections.Generic.KeyNotFoundException: 'The given key 'url' was not present in the dictionary.'

This is the code I'm using:

string apiKey = configuration["AzureOpenAI:ApiKey"];
string deploymentChatName = configuration["AzureOpenAI:DeploymentChatName"];
string deploymentEmbeddingName = configuration["AzureOpenAI:DeploymentEmbeddingName"];
string endpoint = configuration["AzureOpenAI:Endpoint"];

string searchApiKey = configuration["AzureSearch:ApiKey"];
string searchEndpoint = configuration["AzureSearch:Endpoint"];

var embeddingConfig = new AzureOpenAIConfig
{
    APIKey = apiKey,
    Deployment = deploymentEmbeddingName,
    Endpoint = endpoint,
    APIType = AzureOpenAIConfig.APITypes.EmbeddingGeneration,
    Auth = AzureOpenAIConfig.AuthTypes.APIKey
};

var chatConfig = new AzureOpenAIConfig
{
    APIKey = apiKey,
    Deployment = deploymentChatName,
    Endpoint = endpoint,
    APIType = AzureOpenAIConfig.APITypes.ChatCompletion,
    Auth = AzureOpenAIConfig.AuthTypes.APIKey
};

var kernelMemory = new KernelMemoryBuilder()
    .WithAzureOpenAITextGeneration(chatConfig)
    .WithAzureOpenAITextEmbeddingGeneration(embeddingConfig)
    .WithAzureAISearch(searchEndpoint, searchApiKey)
    .Build();

var kernel = new KernelBuilder()
    .AddAzureOpenAIChatCompletion(deploymentChatName, deploymentChatName, endpoint, apiKey)
    .Build();

var plugin = new MemoryPlugin(kernelMemory, waitForIngestionToComplete: true);
kernel.ImportPluginFromObject(plugin, "memory");

var prompt = @"
            Question to Kernel Memory: What is Contoso Electronics?

            Kernel Memory Answer: {{memory.ask What is Contoso Electronics?}}

            If the answer is empty say 'I don't know' otherwise reply with a preview of the answer, truncated to 15 words.
            ";

OpenAIPromptExecutionSettings settings = new()
{
    FunctionCallBehavior = FunctionCallBehavior.EnableKernelFunctions,
};

var chatHistory = new ChatHistory();
chatHistory.AddUserMessage(prompt);

var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
var result = await chatCompletionService.GetChatMessageContentAsync(chatHistory, settings, kernel);

// As long as the content is null, the chat completion service is waiting for a function call to be processed
var functionCall = ((OpenAIChatMessageContent)result).GetOpenAIFunctionResponse();
while (functionCall != null)
{
    KernelFunction pluginFunction;
    KernelArguments arguments;
    kernel.Plugins.TryGetFunctionAndArguments(functionCall, out pluginFunction, out arguments);
    var functionResult = await kernel.InvokeAsync(pluginFunction!, arguments!);
    var jsonResponse = functionResult.GetValue<string>();
    chatHistory.AddFunctionMessage(jsonResponse, pluginFunction.Name);

    result = await chatCompletionService.GetChatMessageContentAsync(chatHistory, settings, kernel);

    // As long as the content is null, the chat completion service is waiting for a function call to be processed
    functionCall = ((OpenAIChatMessageContent)result).GetOpenAIFunctionResponse();
}

Console.WriteLine(result.Content);
Console.ReadLine();

The problem is triggered by this line of code inside the loop:

 var functionResult = await kernel.InvokeAsync(pluginFunction!, arguments!);

I can see that Semantic Kernel has properly figured out that it needs to use the Ask function from the Kernel Memory plugin, and that the argument to pass is the question in my prompt, "What is Contoso Electronics?". However, when the function is called through InvokeAsync(), I always get that exception.
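
Wrapping the failing call in a try/catch surfaces the full stack trace (a minimal diagnostic sketch, not part of the original repro):

try
{
    var functionResult = await kernel.InvokeAsync(pluginFunction!, arguments!);
}
catch (KeyNotFoundException ex)
{
    // Print the full exception, including the stack trace showing which
    // dictionary lookup fails inside the plugin invocation
    Console.WriteLine(ex.ToString());
    throw;
}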

For reference, I used manual function calling with EnableKernelFunctions to better diagnose what's going on. I can also simplify the code as shown below (switching to AutoInvokeKernelFunctions), but I get the same exception:

var prompt = @"
            Question to Kernel Memory: {{$input}}

            Kernel Memory Answer: {{memory.ask $input}}

            If the answer is empty say 'I don't know' otherwise reply with a preview of the answer, truncated to 15 words.
            ";

OpenAIPromptExecutionSettings settings = new()
{
    FunctionCallBehavior = FunctionCallBehavior.AutoInvokeKernelFunctions,
};

KernelArguments arguments = new KernelArguments(settings)
{
    { "input", "What is Contoso Electronics?" },
};

var response = await kernel.InvokePromptAsync(prompt, arguments);

Console.WriteLine(response.GetValue<string>());
Console.ReadLine();

If it's a problem connected to the RC status of Semantic Kernel, no worries, but I wanted to open an issue to track it and make sure it doesn't slip once 1.0 gets released 😊

Package versions:

<PackageReference Include="Microsoft.KernelMemory.Core" Version="0.21.231214.1" />
<PackageReference Include="Microsoft.KernelMemory.SemanticKernelPlugin" Version="0.21.231214.1" />
<PackageReference Include="Microsoft.SemanticKernel" Version="1.0.0-rc3" />

Thank you @qmatteoq - I believe this might be connected to #205 and both should be addressed next week when SK v1.0 is released.

Could you double check?

I created a new empty project, copied and ran your code, and I got the output below; everything seems to work fine.

info: Microsoft.KernelMemory.Handlers.TextExtractionHandler[0]
      Handler 'extract' ready
info: Microsoft.KernelMemory.Handlers.TextPartitioningHandler[0]
      Handler 'partition' ready
info: Microsoft.KernelMemory.Handlers.SummarizationHandler[0]
      Handler 'summarize' ready
info: Microsoft.KernelMemory.Handlers.GenerateEmbeddingsHandler[0]
      Handler 'gen_embeddings' ready, 1 embedding generators
info: Microsoft.KernelMemory.Handlers.SaveRecordsHandler[0]
      Handler save_records ready, 1 vector storages
info: Microsoft.KernelMemory.Handlers.DeleteDocumentHandler[0]
      Handler 'private_delete_document' ready
info: Microsoft.KernelMemory.Handlers.DeleteIndexHandler[0]
      Handler 'private_delete_index' ready
info: Microsoft.KernelMemory.Handlers.DeleteGeneratedFilesHandler[0]
      Handler 'delete_generated_files' ready
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Queueing upload of 1 files for further processing [request ce01]
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      File uploaded: content.txt, 449 bytes
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Handler 'extract' processed pipeline 'default/ce01' successfully
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Handler 'partition' processed pipeline 'default/ce01' successfully
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Handler 'gen_embeddings' processed pipeline 'default/ce01' successfully
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Handler 'save_records' processed pipeline 'default/ce01' successfully
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Pipeline 'default/ce01' complete
Contoso Electronics is a cutting-edge technology company that specializes in designing and manufacturing consumer electronics and smart devices. They are known for their innovative and sleek product lineup.

Project:

<Project Sdk="Microsoft.NET.Sdk">

    <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>net6.0</TargetFramework>
        <ImplicitUsings>enable</ImplicitUsings>
        <Nullable>enable</Nullable>
        <ManagePackageVersionsCentrally>false</ManagePackageVersionsCentrally>
        <NoWarn>CA2007</NoWarn>
    </PropertyGroup>

    <ItemGroup>
      <None Remove=".env" />
      <Content Include=".env">
        <CopyToOutputDirectory>Always</CopyToOutputDirectory>
      </Content>
    </ItemGroup>

    <ItemGroup>
      <PackageReference Include="Microsoft.KernelMemory.Core" Version="0.21.231214.1"/>
      <PackageReference Include="Microsoft.KernelMemory.SemanticKernelPlugin" Version="0.21.231214.1"/>
      <PackageReference Include="Microsoft.SemanticKernel" Version="1.0.0-rc3"/>
      <PackageReference Include="dotenv.net" Version="3.1.3"/>
    </ItemGroup>

</Project>

Code:

// Copyright (c) Microsoft. All rights reserved.

using Microsoft.KernelMemory;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.AI.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.AI.OpenAI;

// =============
// == SETUP
// =============

dotenv.net.DotEnv.Load();
var env = dotenv.net.DotEnv.Read();

var embeddingConfig = new AzureOpenAIConfig
{
    Endpoint = env["AZURE_OPENAI_EMBEDDING_ENDPOINT"],
    APIKey = env["AZURE_OPENAI_EMBEDDING_API_KEY"],
    Deployment = env["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"],
    APIType = AzureOpenAIConfig.APITypes.EmbeddingGeneration,
    Auth = AzureOpenAIConfig.AuthTypes.APIKey
};

var chatConfig = new AzureOpenAIConfig
{
    Endpoint = env["AZURE_OPENAI_CHAT_ENDPOINT"],
    APIKey = env["AZURE_OPENAI_CHAT_API_KEY"],
    Deployment = env["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    APIType = AzureOpenAIConfig.APITypes.ChatCompletion,
    Auth = AzureOpenAIConfig.AuthTypes.APIKey
};

var kernelMemory = new KernelMemoryBuilder()
    .WithAzureOpenAITextGeneration(chatConfig)
    .WithAzureOpenAITextEmbeddingGeneration(embeddingConfig)
    .WithAzureAISearch(env["AZURE_SEARCH_ENDPOINT"], env["AZURE_SEARCH_API_KEY"])
    .Build();

var kernel = new KernelBuilder()
    .AddAzureOpenAIChatCompletion(chatConfig.Deployment, chatConfig.Deployment, chatConfig.Endpoint, chatConfig.APIKey)
    .Build();

MemoryPlugin plugin = new MemoryPlugin(kernelMemory, waitForIngestionToComplete: true);

// ======================
// == IMPORT DATA
// ======================

await plugin.SaveAsync(
    "Contoso Electronics is a cutting-edge technology company at the forefront of innovation in the electronics industry. " +
    "Established in the visionary year of 2035, Contoso Electronics has rapidly become a global leader in designing and manufacturing state-of-the-art consumer electronics and smart devices. " +
    "With a mission to simplify and enhance daily life through technology, Contoso Electronics is renowned for its sleek and futuristic product lineup.",
    documentId: "ce01");

// ==============================
// == QUERY MEMORY WITH PLUGIN
// ==============================

kernel.ImportPluginFromObject(plugin, "memory");

var prompt = @"
            Question to Kernel Memory: What is Contoso Electronics?

            Kernel Memory Answer: {{memory.ask What is Contoso Electronics?}}

            If the answer is empty say 'I don't know' otherwise reply with a preview of the answer, truncated to 15 words.
            ";

OpenAIPromptExecutionSettings settings = new()
{
    FunctionCallBehavior = FunctionCallBehavior.EnableKernelFunctions,
};

var chatHistory = new ChatHistory();
chatHistory.AddUserMessage(prompt);

var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
var result = await chatCompletionService.GetChatMessageContentAsync(chatHistory, settings, kernel);

// As long as the content is null, the chat completion service is waiting for a function call to be processed
var functionCall = ((OpenAIChatMessageContent)result).GetOpenAIFunctionResponse();
while (functionCall != null)
{
    KernelFunction pluginFunction;
    KernelArguments arguments;
    kernel.Plugins.TryGetFunctionAndArguments(functionCall, out pluginFunction, out arguments);
    var functionResult = await kernel.InvokeAsync(pluginFunction!, arguments!);
    var jsonResponse = functionResult.GetValue<string>();
    chatHistory.AddFunctionMessage(jsonResponse, pluginFunction.Name);

    result = await chatCompletionService.GetChatMessageContentAsync(chatHistory, settings, kernel);

    // As long as the content is null, the chat completion service is waiting for a function call to be processed
    functionCall = ((OpenAIChatMessageContent)result).GetOpenAIFunctionResponse();
}

Console.WriteLine(result.Content);
Console.ReadLine();

.env

MEMORY_API_KEY=...

AZURE_OPENAI_CHAT_ENDPOINT=https://....openai.azure.com/
AZURE_OPENAI_CHAT_API_KEY=...
AZURE_OPENAI_CHAT_DEPLOYMENT=gpt-35-turbo-16k

AZURE_OPENAI_EMBEDDING_ENDPOINT=https://....openai.azure.com/
AZURE_OPENAI_EMBEDDING_API_KEY=...
AZURE_OPENAI_EMBEDDING_DEPLOYMENT=text-embedding-ada-002

AZURE_SEARCH_ENDPOINT=https://....search.windows.net
AZURE_SEARCH_API_KEY=...

Thanks a lot @dluc, and sorry for wasting your time! I've now figured out what happened. My vector index was created by another web application, also based on Kernel Memory, but on a much older version (0.15). I upgraded it to the latest version (matching the version I'm using in my console app with Semantic Kernel), recreated the index by re-uploading the Contoso Electronics document, and now everything works.
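
For anyone hitting the same error: the records written by the older version apparently lack payload fields (such as 'url') that the current version expects to find, hence the KeyNotFoundException. Recreating the index looks roughly like this (a sketch, assuming the kernelMemory instance from the code above and a placeholder file name):

// Drop the index created by the old version, then re-ingest so the
// records are written with the current schema
await kernelMemory.DeleteIndexAsync("default");
await kernelMemory.ImportDocumentAsync("contoso-electronics.pdf", documentId: "ce01");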

Thanks a ton!

No problem, I'll take the blame for the many backward-incompatible changes :-) Glad all is working fine!