Chris Shinnimin

Final Fantasy Bot Project Journal

A fun personal project to learn LLMs and React, and rekindle my love of a favourite childhood game.

September 16, 2025

FFBot Becomes an Agent: Altering NES RAM

Key learnings:

  • Differences between React Context and refs - different tools for different situations (see the sketch after this list).
    • Context state updates only at the end of a render cycle, so it is susceptible to stale-value issues when read from async functions and callbacks, and updating it triggers a re-render of the UI.
    • A ref updates immediately, within the current render cycle, so it has no stale-value issues, but updating it does not trigger a re-render of the UI.
  • LLM agent apps may require a "correction" engine.
    • For my specific use case, I am training the LLM to respond in a specific JSON format for my app, and when it strays it needs to be corrected. I am not yet sure whether this is a pattern we would see in a professional agent application.
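
Here is a minimal sketch of the Context-vs-ref difference (a toy component for illustration, not FFBot code): state captured in an async closure goes stale, while a ref read at the same moment stays current.

```tsx
import { useRef, useState } from "react";

// Toy component: compare a state value and a ref inside an async handler.
function StaleDemo() {
  const [count, setCount] = useState(0); // updating triggers a re-render
  const countRef = useRef(0);            // updating does NOT trigger a re-render

  async function handleClick() {
    setCount(count + 1);   // queued; `count` in this closure keeps the old value
    countRef.current += 1; // visible immediately to anything that reads it

    await new Promise((resolve) => setTimeout(resolve, 1000));
    console.log(count);            // stale: the value captured at render time
    console.log(countRef.current); // fresh: reads the live mutable value
  }

  return <button onClick={handleClick}>clicked {count} times</button>;
}
```

This is why I track the conversation in a ref: functions that run between renders always see the latest messages, and appending a message doesn't force a UI update.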

Creating a useTraining Hook and "Correction" Mechanism

This blog post covers work that occurred over the past few days. I first spent some time debugging a problem that manifested as parsing errors in the responses coming back from the LLM: it was not consistently following my training instructions to respond in specific JSON formats. I initially assumed the LLM was "forgetting" the instructions (that assumption was wrong, more on that later), so I devised a new concept of a useTraining hook with an issueCorrection function, which messages the LLM with a correction whenever it fails to follow one of my training instructions. It ended up working great: after the LLM responded in plain text, the bot app automatically coaxed it back to responding in JSON. This does of course have a time and effort cost, since another message must be sent to the LLM to correct its behaviour.

Ultimately I realized the LLM was failing to respond with JSON because I was not properly tracking the conversation array (via my LLMMessage reference array). The conversation array I was passing back to the LLM for context contained plain-string responses for the assistant, as if it had replied in plain text, possibly leading it to think it should respond with a string next. Once I cleaned up that bug, the corrective message was no longer needed, but I am leaving the mechanism in place in case it is needed in the future.
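
As a rough sketch of the idea (the sendToLlm parameter, message type, and prompt text here are illustrative assumptions, not my exact implementation), the hook looks something like this:

```tsx
import { useCallback } from "react";

// Hypothetical message type and prompt text, for illustration only.
type LLMMessage = { role: "system" | "user" | "assistant"; content: string };

const CORRECTION_PROMPT =
  "Your last reply was not valid JSON. Respond again using only the required JSON format.";

export function useTraining(
  sendToLlm: (history: LLMMessage[]) => Promise<string>
) {
  // Appends a corrective instruction to the conversation and asks the
  // LLM to try again when it strays from the trained JSON format.
  const issueCorrection = useCallback(
    async (history: LLMMessage[]): Promise<string> => {
      const corrected: LLMMessage[] = [
        ...history,
        { role: "user", content: CORRECTION_PROMPT },
      ];
      return sendToLlm(corrected);
    },
    [sendToLlm]
  );

  return { issueCorrection };
}
```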

Building the RAM Write Request Hook

Once the application was ostensibly bug-free once more, I set to work implementing the RAM Write Request hook so that FFBot can update RAM in real time. I created a simple Python/Flask API endpoint the React app can use to drop contents onto the RAM disk for the Lua daemon to pick up and execute, updating the RAM in the emulator. Since I had already successfully trained the LLM to write the required Lua scripts, hooking this all up was pretty simple: the React app just plops the script provided by the LLM into an HTTP POST request to the new Python endpoint.
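
The React side amounts to something like the sketch below (the /ram-write route, payload key, and port are placeholder assumptions, not the actual Flask endpoint):

```tsx
import { useCallback } from "react";

// Sketch of the RAM Write Request hook: POST the Lua script the LLM
// generated to the Flask endpoint, which drops it onto the RAM disk
// for the Lua daemon to pick up and execute in the emulator.
export function useRamWriteRequest(apiBase = "http://localhost:5000") {
  const requestRamWrite = useCallback(
    async (luaScript: string): Promise<void> => {
      const res = await fetch(`${apiBase}/ram-write`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ script: luaScript }),
      });
      if (!res.ok) {
        throw new Error(`RAM write request failed: ${res.status}`);
      }
    },
    [apiBase]
  );

  return { requestRamWrite };
}
```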

Refactoring sendLlmMessage into a Loop

I also spent a bit of time refactoring the main sendLlmMessage function to be easier to understand and to contain a loop. We don't want it returning the agent's response to a consumer (our consumer is our ChatContainer component) until we know we have its final response, since the LLM can request services like RAM reads and writes, and can now also require corrective messages. I will explain the following flow in more detail in today's video:

[Flow diagram: how sendLlmMessage loops through service requests and corrections before returning the final response]
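
As a rough TypeScript sketch of that loop (the action types, prompt text, and helper signatures are assumptions for illustration, not the real protocol):

```tsx
type LLMMessage = { role: "user" | "assistant"; content: string };

// Simplified stand-in for the JSON protocol the LLM is trained on.
type BotAction =
  | { type: "final"; reply: string }       // the answer for ChatContainer
  | { type: "ram_read"; script: string }   // service request: read NES RAM
  | { type: "ram_write"; script: string }; // service request: write NES RAM

const CORRECTION_PROMPT =
  "Your last reply was not valid JSON. Respond again using only the required JSON format.";

async function sendLlmMessage(
  userText: string,
  history: LLMMessage[],
  callLlm: (h: LLMMessage[]) => Promise<string>,
  runService: (a: BotAction) => Promise<string> // executes RAM reads/writes
): Promise<string> {
  history.push({ role: "user", content: userText });
  let raw = await callLlm(history);

  // Keep looping until the LLM produces its final response; service
  // requests and corrections feed back into the conversation instead
  // of being returned to the consumer.
  while (true) {
    history.push({ role: "assistant", content: raw });

    let action: BotAction;
    try {
      action = JSON.parse(raw);
    } catch {
      // The "correction" path: record a corrective user turn and ask again.
      history.push({ role: "user", content: CORRECTION_PROMPT });
      raw = await callLlm(history);
      continue;
    }

    if (action.type === "final") return action.reply;

    // A service request: run it and feed the result back to the LLM.
    const result = await runService(action);
    history.push({ role: "user", content: result });
    raw = await callLlm(history);
  }
}
```

Note that every turn, including corrections and service results, gets pushed onto the history: skipping that bookkeeping is exactly what caused the original plain-text-response bug.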

Demo of Today's Accomplishments