
Bicameral-GPT: A TypeScript generative agent trained on your journal!

Bicameral-GPT is an experimental, personalized generative agent trained on your journal entries.

Demo conversation with the agent

Bicameral-GPT ingests a set of "core memories" and journal entries to get a sense of your day to day life and how you are affected by events in it. You can then prompt Bicameral-GPT with questions (e.g. Are you a fan of Westworld?) and stimuli (e.g. Your cute neighbor from down the hall invites you to dinner), and Bicameral-GPT will draw on the ingested memories to create responses. It will weigh more recent and impactful experiences more heavily when coming up with responses - after all, what you had for breakfast three weeks ago shouldn't have as much of an impact on your mental state as finding a new job!

Bicameral-GPT can also "summarize" your current traits and status based on your entries, allowing you some LLM-powered insight and introspection into your current mental state and character:

The analysis function

What you'll need

  - A Notion account and a Notion integration token with access to your journal page
  - A Supabase project to use as the vector store
  - An OpenAI API key
  - Node.js and yarn

Quickstart

Bicameral-GPT uses the LangChainJS implementation of generative agents, as well as the Notion document loader and Supabase vector store.

  1. Copy the .env.example file into a .env file.
  2. Follow these instructions and create a Notion integration with access to a page in your workspace. The required peer dependencies are already included in this repo, so you can skip that step. Populate your .env file's NOTION_INTEGRATION_TOKEN with your integration token.
  3. Populate a Notion page with a few journal entries. We recommend a structure where each new entry is a subpage within the main page, and each subpage's title is its date in a parseable format (see the example Notion journal page format). If the title of a subpage is not a parseable date, Bicameral-GPT will fall back to the date the subpage was created, which may not match the journal entry's true date. Populate the NOTION_PAGE_ID variable in your .env file with the ID of your journal page.
  4. Create a new Supabase instance and follow these instructions to set up a table for your stored documents. Populate the SUPABASE_PRIVATE_KEY and SUPABASE_URL variables in your .env file appropriately.
  5. Fill in the OPENAI_API_KEY variable with your OpenAI key.
  6. Populate the remaining environment variables for AGENT_CORE_TRAITS, AGENT_NAME, AGENT_STATUS, and optionally AGENT_AGE (a filled-in example .env appears after this list).
  7. Open scripts/ingest.ts and replace CORE_MEMORIES at the top with some personalized core memories and traits you'd like your agent to have.
  8. Run yarn install to install the required dependencies.
  9. Run yarn ingest to load your core memories and journal entries from Notion. When this is complete, you'll see your agent's current status.
  10. Run yarn dev to start the Next.js app.
  11. Go to localhost:3000 to start asking your agent questions and prodding it with stimuli!
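
For reference, a populated .env might look something like the sketch below. All values are placeholders; substitute your own keys, IDs, and persona details:

```
# Notion journal source
NOTION_INTEGRATION_TOKEN="secret_xxx"
NOTION_PAGE_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Supabase vector store
SUPABASE_URL="https://your-project.supabase.co"
SUPABASE_PRIVATE_KEY="your-supabase-private-key"

# OpenAI
OPENAI_API_KEY="sk-xxx"

# Agent persona
AGENT_NAME="Alex"
AGENT_CORE_TRAITS="curious, introspective, a little anxious"
AGENT_STATUS="settling into a new job"
AGENT_AGE="30"   # optional
```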

At present, your agent responds best to standalone questions and isn't yet well suited to extended conversations.

Latency with GPT-4 is presently around 20-30 seconds per question depending on how many memories your agent has. You can experiment with faster, cheaper models as well.
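
For example, if the agent's underlying LLM is constructed with LangChain's ChatOpenAI wrapper (the exact construction in this repo may differ), swapping in a faster model could look like this hypothetical snippet:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

// gpt-3.5-turbo responds much faster and is cheaper than gpt-4,
// at some cost to response quality.
const llm = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
});
```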

Ingesting new memories

yarn ingest is idempotent based on the text of your core memories and the Notion page id of your journal entries, so to keep your agent up to date, you can simply rerun the command.
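
Under the hood, ingestion loads your journal page and its entry subpages via the LangChainJS Notion document loader before handing them to the agent's memory (described under "Adding memories" below). Here is a minimal sketch of the loading step, assuming the NotionAPILoader and the environment variables from the quickstart; names and options are illustrative rather than the exact code in scripts/ingest.ts:

```typescript
import { NotionAPILoader } from "langchain/document_loaders/web/notionapi";

// Load the journal page and its entry subpages from Notion.
const loader = new NotionAPILoader({
  clientOptions: { auth: process.env.NOTION_INTEGRATION_TOKEN },
  id: process.env.NOTION_PAGE_ID!,
  type: "page",
});

const journalEntries = await loader.load();
console.log(`Loaded ${journalEntries.length} journal documents from Notion`);
```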

Your generative agent can "form" new memories and even acquire new traits based on the conversations you have with it and the stimuli you prod it with. However, by default, ingesting new memories will clear these generated memories, leaving your agent with only the state from your journal entries and core memories. This is mainly for consistency: it keeps introspection more accurate and keeps the focus on the most relevant memories. If you would like to change this behavior, you can comment out the applicable lines in scripts/ingest.ts.

Clearing your agent

The only state required for your agent to run is in your created Supabase table. If you'd like to reset your agent, you can clear the table from your Supabase console, or run yarn wipe as a shortcut.

How does it work?

Adding memories

The core of your agent's state is a vector store that stores individual memories. The agent assigns ingested memories a normalized importance score via LLM, and also keeps track of when each memory was added and last accessed.
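
Below is a rough sketch of how such a memory store can be wired together with LangChainJS's experimental generative agents module, the Supabase vector store, and a time-weighted retriever. Variable names, table names, and option values are illustrative assumptions rather than the exact code in this repo:

```typescript
import { createClient } from "@supabase/supabase-js";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { TimeWeightedVectorStoreRetriever } from "langchain/retrievers/time_weighted";
import { GenerativeAgentMemory } from "langchain/experimental/generative_agents";

const client = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_PRIVATE_KEY!
);

// Memories live in a Supabase table as embedded documents
// (table and query names from the Supabase setup step).
const vectorStore = new SupabaseVectorStore(new OpenAIEmbeddings(), {
  client,
  tableName: "documents",
  queryName: "match_documents",
});

// The retriever weights results by semantic relevance, recency, and the
// stored importance score, so older, less impactful memories fade.
const retriever = new TimeWeightedVectorStoreRetriever({
  vectorStore,
  otherScoreKeys: ["importance"],
  k: 15,
});

const llm = new ChatOpenAI({ modelName: "gpt-4" });

// The memory scores each new memory's importance via the LLM and triggers
// a reflection pass once accumulated importance crosses the threshold.
const agentMemory = new GenerativeAgentMemory(llm, retriever, {
  reflectionThreshold: 8,
});

// Adding a memory records its importance score and creation time.
await agentMemory.addMemory(
  "Started reading a new book on neuroscience and loved it.",
  new Date()
);
```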

Here's an example trace of what this looks like:

https://smith.langchain.com/public/7eeed3e6-9c1c-41c1-a5bf-90ac63050671/r

At certain thresholds, the agent will also reflect on its memories, extracting the most important themes and attempting to draw insights from its experiences. It then adds these synthesized insights as new memories, reinforcing the agent's most important traits. This is meant to mimic a similar subconscious process in humans.

Here's an example trace of adding a memory that triggers a reflection step:

https://smith.langchain.com/public/1ab6bdfc-c6c8-47f1-b0d8-6a165b0210eb/r

Generating responses

When responding to inputs, your agent performs a few tasks. Roughly, it:

  1. Creates an overview of its current state based on its most relevant memories and recent observations (or uses a cached value).
  2. Extracts the most relevant entity from the input.
  3. Extracts the relevant action the entity is doing.
  4. Attempts to determine the relationship between the agent's persona and the entity.
  5. Uses the current state and the retrieved information to formulate a response.

Here's an example trace of what this looks like:

https://smith.langchain.com/public/fb33a0eb-34a0-49c0-b55c-726068f55fb1/r

The agent will also store inputs and the generated responses as memories which can be referenced later. For example, asking the agent to react to News that a hostile alien invasion is approaching Earth may make the agent more stressed or worried in its future responses.
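
Putting it together, prompting the agent and asking for its introspective summary could look roughly like the following, assuming the same LangChainJS generative agents API and reusing the llm and agentMemory objects from the sketch in "Adding memories". This is an illustrative sketch, not the repo's exact code:

```typescript
import { GenerativeAgent } from "langchain/experimental/generative_agents";

// llm and agentMemory are the objects built in the "Adding memories" sketch.
// Persona details come from the AGENT_* environment variables.
const agent = new GenerativeAgent(llm, agentMemory, {
  name: process.env.AGENT_NAME!,
  age: Number(process.env.AGENT_AGE ?? "30"), // AGENT_AGE is optional
  traits: process.env.AGENT_CORE_TRAITS!,
  status: process.env.AGENT_STATUS!,
});

// A question: the agent retrieves relevant memories and replies in character.
const [, reply] = await agent.generateDialogueResponse(
  "Are you a fan of Westworld?"
);

// A stimulus: the reaction is itself stored as a new memory for later prompts.
const [, reaction] = await agent.generateReaction(
  "News that a hostile alien invasion is approaching Earth"
);

// The LLM-powered summary of current traits and status used for introspection.
const summary = await agent.getSummary({ forceRefresh: true });
```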

Other tips

Try to keep individual journal entries relatively brief and limit them to the most impactful moments. Include your reactions to them and how they made you feel.

Acknowledgements

This was heavily inspired by the work of Joon Sung Park et al. on generative agents.


License

MIT

