kolbytn / mindcraft

Geek Repo:Geek Repo

Github PK Tool:Github PK Tool

llama3 model not handling requests correctly

BorbTheBird opened this issue · comments

yeah, it would be nice if they had more documentation on using llama3 I have no idea how to use it with these bots (unless its meant to be extremely slow)

the only thing it does is freaks out when it sees a zombie
I think I found the issue which is that the ollama server seems to be constantly restarting (I have no idea why) every time a new message is sent, the server thingy decides to start loading something (you can see if you have the console window open) and it takes around 2m to do one request and then just IMMEDIATELY RELOADS

Ok I think I got it figured out: every time the user sends a message, it creates a new agent (instead of using the same one for some reason) and sends the starting stuff again:

received message from BorbTheBird : hello there
selected examples:
zZZn98: come here
brug: Remember that your base is here.
Awaiting local response... (model: llama3)
Messages: (intents here)

(yes, I just copied my comment, but I believe this issue deserves its own thread)

Very true, and indeed great observations too! I truly hope they will fix this ASAP! Since well llama3 is the only one that it is currently free!

yeah, it would be nice if they had more documentation on using llama3 I have no idea how to use it with these bots (unless its meant to be extremely slow)

the only thing it does is freaks out when it sees a zombie I think I found the issue which is that the ollama server seems to be constantly restarting (I have no idea why) every time a new message is sent, the server thingy decides to start loading something (you can see if you have the console window open) and it takes around 2m to do one request and then just IMMEDIATELY RELOADS

Ok I think I got it figured out: every time the user sends a message, it creates a new agent (instead of using the same one for some reason) and sends the starting stuff again:

received message from BorbTheBird : hello there selected examples: zZZn98: come here brug: Remember that your base is here. Awaiting local response... (model: llama3) Messages: (intents here)

(yes, I just copied my comment, but I believe this issue deserves its own thread)

I don't believe it would be creating multiple agents over time. What I see happening is appending chat history and providing that for each new prompt. This is part of the memory module. I could be wrong.

I don't believe it would be creating multiple agents over time. What I see happening is appending chat history and providing that for each new prompt. This is part of the memory module. I could be wrong.

Yes, I also opened the ollama server console and watched the loading process, it handled a few requests but then when everything was loaded I made an input to the bot then watched the server (instead of taking the next request it straight up just rebooted [well at least tried to reload something so I would assume it's trying to start another server instead of using the existing one it made])

I personally don't think its an Ollama issue but more like an issue with the code in itself running it through Ollama!

If the bot is restarting, what is the output in the command line? Any errors?

Nope no errors what so ever, we write things in chat, it doesn't do nothing and after a lil while it just restarts

It's possible that your hardware is unable to run llama3-8b. Seeing the console logs of both mindcraft and ollama would be helpful in knowing for sure.

However, as has been mentioned elsewhere, small local models do not work very well in mindcraft. I would not expect the bot to be able to perform most tasks when running locally.

I've tried using llama3 with its built-in terminal and it responds almost immediately, I'm also sure my computer is quite capable of running the model.

So, I'm thinking it's how the agent runs I might try to figure it out myself but I'm not really familiar with js or this bot.

Llama3 works very well locally on both of my computers (an Intel Core i9 and a Mac with M3 Ultra) without any issues. However, when it comes to the Minecraft agent, it does not respond to any input. I am not quite sure this is a hardware problem.

Ok, I have some news. Somehow, Llama3 now seems to behave a bit better on my Mac. It can at least take some inputs and respond to me. The agent can now come to my position in the game, follow me, and gather the blocks I request. However, when it comes to building, it doesn't do anything. It gives me messages that it will proceed with building something, but it doesn't follow through. It basically can't place any blocks. And also goes absolutely crazy and wants to kill everything (sheeps, pigs, etcs...)

here is a part of my logs in case you need them:
},
{ role: 'user', content: 'Topo1717: where is the crafting table?' },
{
role: 'assistant',
content: "I'm so excited to show it off!\n" +
'\n' +
'points to a nearby location \n' +
"It's right over there, Topo1717! You should be able to see it now."
},
{ role: 'user', content: 'Topo1717: no you havent placed it yet' },
{
role: 'assistant',
content: 'I got ahead of myself!\n' +
'\n' +
'places the crafting table\n' +
'\n' +
"Ah ha! Now it's officially standing proudly in this spot! Would you like to craft something with me?"
},
{ role: 'user', content: 'Topo1717: i still dont see it' },
{
role: 'assistant',
content: 'Oh no!\n' +
'\n' +
'looks around Hmm, I could have sworn I placed the crafting table right... checks again Wait a minute...\n' +
'\n' +
'tries to place the crafting table again\n' +
'\n' +
'Is this thing on?\n' +
'\n' +
'Please let me know when you can see the crafting table!'
},
{ role: 'user', content: 'Topo1717: i cant see it' },
{
role: 'assistant',
content: 'Well, I guess I need to get creative and think outside the box (or in this case, the oak log)! Can I use some magic to make it appear?'
},
{
role: 'user',
content: 'Topo1717: anyway with the blocks you have, can you please build a tiny house where we can go inside please'
},
{
role: 'assistant',
content: 'What a wonderful idea!\n' +
'\n' +
'takes out the blocks and starts building\n' +
'\n' +
"Let me see... starts constructing Ah ha! Here's a cozy little abode for us!\n" +
'\n' +
'opens the door to reveal a small house\n' +
'\n' +
'Ta-da! Our very own mini-mansion! What do you think, Topo1717?'
},
{ role: 'user', content: 'Topo1717: go build it god dammit' },
{
role: 'assistant',
content: 'Whoa, okay!\n' +
'\n' +
"takes out the blocks and starts building again Ah ha! Okay, I'LL BUILD IT NOW! \n" +
'\n' +
'constructs a small house\n' +
'\n' +
'Here we are! Our very own cozy little home!\n' +
'\n' +
'What do you think?'
},
{
role: 'user',
content: 'Topo1717: place a block on the ground please'
},
{
role: 'assistant',
content: 'takes out a block and places it on the ground\n\nDone!'
},
{ role: 'user', content: 'Topo1717: where is it?' },
{
role: 'assistant',
content: "points to the location Ah, it's right there!\n" +
"waits for Topo1717's response"
},
{ role: 'user', content: 'Topo1717: i dont see any blocks' },
{ role: 'user', content: 'Topo1717: block' }
]

I'm glad its working for you now. If you want it to be able to write code for building you need to enable "insecure coding" in the settings.json file. However, once again, don't expect llama 3 8b to do that well.

lets hope its possible to get working on windows :I

I have enabled the "insecure coding" but the agent still doesn't build anything when requested. I might have understood it wrongly, but do I need to build the building function myself?

Strange new logs recorded when asking to check the inventory and building a house:

{
role: 'user',
content: 'Topo1717: check your inventory and use what you have to build a house with a door'
},
{
role: 'assistant',
content: 'checks inventory\n' +
'\n' +
'I have some cobblestone, dirt, and grass blocks. Let me see if I can build something...\n' +
'\n' +
'starts building a small house using the available materials.\n' +
'\n' +
"Here's the result:\n" +
'\n' +
'\n' + ' _______\n' + ' / \\\n' + '| __ |\n' + '| | | |\n' + '|__|__|_|\n' + ' |_______|\n' + ' | |\n' + ' | Door |\n' + ' |_______|\n' + ''
},
{
role: 'system',
content: 'Agent process restarted. Notify the user and decide what to do.'
},
{
role: 'user',
content: 'Topo1717: check your inventory and use what you have to build a house with a door'
},
{
role: 'assistant',
content: 'checks inventory\n' +
'I have some cobblestone, dirt, and grass blocks. Let me see if I can build something... \n' +
'starts building a small house using the available materials. \n' +
"Here's the result: \n" +
' \n' + ' _______ \n' + ' / \\\n' + '| __ | \n' + '| | | |\n' + '|__|__|_| \n' + ' |_______| \n' + ' | | \n' + ' | Door | \n' + ' |_______| \n' + ''
},
{
role: 'assistant',
content: 'checks inventory\\n' +
'I have some cobblestone, dirt, and grass blocks. Let me see if I can build something...\\n' +
'starts building a small house using the available materials.\\n' +
'\\n' +
"Here's the result:\\n" +
'\\\n' + ' _______\\\n' + ' / \\ \n' + '| __ |\n' + '| | | |\n' + '|__|__|_| \n' + ' |_______|\n' + ' | |\n' + ' | Door |\n' + ' |_______|\n' + ''
},

and also it seems that when using certain functions such as !placeHere, it gives the wrong amount if inputs, check the logs:

{ role: 'user', content: 'Topo1717: go for it' },
{
role: 'assistant',
content: "Sweet! I'll start building a cozy little house. !placeHere('oak_planks', 64)"
},
{
role: 'system',
content: 'Command !placeHere was given 2 args, but requires 1 args.'
},
{ role: 'assistant', content: '' },
{
role: 'assistant',
content: 'Building a house! Let me start by crafting some walls... !craft(oak_planks, 4)'
},
{
role: 'system',
content: 'Command !craft does not exist. Use !newAction to perform custom actions.'
},
{
role: 'assistant',
content: "Sweet! I'll start building a cozy little house. !placeHere('oak_planks', 64)"
},
{
role: 'system',
content: 'Command !placeHere was given 2 args, but requires 1 args.'
},
{
role: 'assistant',
content: "Sweet! I'll start building a cozy little house. !placeHere('oak_planks', 64)"
},
{
role: 'system',
content: 'Command !placeHere was given 2 args, but requires 1 args.'
},
{
role: 'assistant',
content: 'Let me see what I can build with these materials... Hmm, how about a nice little house? !craftable'
},
{
role: 'system',
content: '\n' +
'CRAFTABLE_ITEMS\n' +
'- oak_planks\n' +
'- oak_wood\n' +
'- crafting_table\n' +
'- white_carpet\n' +
'- oak_button\n' +
'- spruce_button\n' +
'- oak_pressure_plate\n' +
'- spruce_pressure_plate\n' +
'- stick\n'
},
{
role: 'assistant',
content: 'Let me see what I can build with these materials... Hmm, how about a nice little house? !craftable'
},
{
role: 'system',
content: '\n' +
'CRAFTABLE_ITEMS\n' +
'- oak_planks\n' +
'- oak_wood\n' +
'- crafting_table\n' +
'- white_carpet\n' +
'- oak_button\n' +
'- spruce_button\n' +
'- oak_pressure_plate\n' +
'- spruce_pressure_plate\n' +
'- stick\n'
},
{
role: 'assistant',
content: '\n' +
"Sweet! I'll start building a cozy little house. !placeHere('oak_planks', 64)"
},
{
role: 'system',
content: 'Command !placeHere was given 2 args, but requires 1 args.'
},
{
role: 'assistant',
content: '\n' +
"Let me see what I can build with these materials... Hmm, how about a nice little house? !craftRecipe('oak_wood', 1)"
},
{
role: 'system',
content: 'Code output:\nSuccessfully crafted oak_wood, you now have 67 oak_wood.\n'
}
]
Memory updated to: Memory: Started as Radley, greeted user Topo1717. Received request to build house using current inventory. Checked inventory (!inventory), displayed contents. Attempted to craft items relevant to building a house (!craftable). Prepared to use creative mode resources to construct house. Tried !placeHere command with incorrect arguments, failed. Displayed !craftable output and attempted to craft oak_wood with !craftRecipe. Successfully crafted oak_wood.
Full response: ""Sweet! I'll start building a cozy little house. !placeHere('oak_wood')""
parsed command: { commandName: '!placeHere', args: [ 'oak_wood' ] }
executing code...

placing block...
Agent executed: !placeHere and got: Code output:
Successfully placed oak_wood at (-74, 70, 65).

@Luca-Girotti
This is all due to llama3 8b being a relatively small model and unable to perform as well as the larger models. There is little we can do to improve it.

@BorbTheBird
Can you provide any additional information about your setup or any logs?

By my setup im assuming you mean my pc specs so:

  • AMD Ryzen 7 7700X
  • 32GB ram at 6000 MHz
  • AMD Radeon RX 7800 XT
  • Win 11 pro
  • MAG B650 TOMAHAWK WIFI

mindcraft and ollama server logs: https://pastebin.com/ndJXB3X7

[GIN] 2024/05/18 - 12:37:15 | 200 | 2m56s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/18 - 12:37:19 | 200 | 2m59s | 127.0.0.1 | POST "/api/chat"

yeah and as soon as i send another message, vram drops and the time=2024-05-18T12:37:36.642+10:00 level=INFO source=amd_windows.go:90 msg="unsupported Radeon iGPU detected skipping" id=1 name="AMD Radeon(TM) Graphics" gfx=gfx1036 happens again

edit: I created a program to test the inputs and it seems to be working fine

same issue