e2b-dev / E2B

Secure cloud runtime for AI apps & AI agents. Fully open-source.

Home Page: https://e2b.dev/docs

Code Interpreter v2.0

mlejva opened this issue

The current version of the code interpreter SDK lets you run Python code, but each run has its own separate context. That means subsequent runs can't reference variables, definitions, etc. from past code execution runs.

This is suboptimal for a lot of Python use cases with LLMs. GPT-3.5 and GPT-4 in particular expect to be running in a Jupyter Notebook environment, even when one tries to convince them otherwise. In practice, LLMs will generate code blocks that reference previous code blocks. This becomes an issue if a user wants to execute each code block separately, which is often the use case.
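A minimal sketch of the gap, reusing the CodeInterpreterV2.create and sandbox.execPython calls shown later in this thread (the import path and the shape of the returned result are assumptions):

// Import path is an assumption; check the SDK docs for the actual one.
import { CodeInterpreterV2 } from "e2b";

const sandbox = await CodeInterpreterV2.create({
  apiKey: process.env["E2B_API_KEY"],
});

// Block 1: an LLM-generated cell defines a variable...
await sandbox.execPython("x = 41");

// Block 2: ...and a later cell references it, as it would in a notebook.
// With per-run isolation this fails with NameError: name 'x' is not defined;
// a stateful interpreter is meant to keep `x` alive between runs.
const result = await sandbox.execPython("print(x + 1)");
console.log(result);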

Related #326

What's the status on this?

We have a new code interpreter in the works and some users are already using it. You can go ahead and start using it here: #326

It'll soon be released on the main branch.

If you give it a try, please share your feedback; it's tremendously helpful for us.

@im-calvin let me know if you're asking about anything specific such as a feature that's blocking you

Thanks for the update. I'm trying it out right now and I'm getting this error:

RPC Error (-32000): error reading file '/root/.jupyter/kernel_id': open /root/.jupyter/kernel_id: no such file or directory

EDIT: I'm using version 0.12.6-stateful-code-interpreter.8 and calling sandbox.execPython(code) from within TypeScript

@im-calvin Can you share the code? Are you using a custom sandbox template?

Here's my code & Dockerfile:

# e2b.Dockerfile

# You can use most of the Debian-based base images
FROM ubuntu:22.04

# Install the ffmpeg tool
RUN apt update \
  && apt install -y ffmpeg

// Create the sandbox from the custom template and run the code
const sandbox = await CodeInterpreterV2.create({
  template: "custom-sandbox",
  apiKey: process.env["E2B_API_KEY"],
});

const result = await sandbox.execPython(code);

I added a section to the PR about custom templates. If there's anything unclear, let me know :)
#326
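For context, the kernel_id error above suggests the sandbox image is missing the Jupyter kernel that the stateful interpreter relies on, and a plain ubuntu:22.04 base does not ship one. Below is a rough sketch of how a custom template could build on top of the interpreter's own base image instead; the image name is an assumption, and the custom-templates section in #326 is the authoritative reference.

# e2b.Dockerfile (sketch)
# The base image name below is an assumption; use the one documented in #326.
FROM e2bdev/code-interpreter:latest

# Extra tooling can still be layered on top, e.g. ffmpeg
RUN apt update \
  && apt install -y ffmpeg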