substack.com/@jdietzai/note/c-131340092

Preview meta tags from the substack.com website.

Linked Hostnames

1

Thumbnail

Search Engine Appearance

Google

https://substack.com/@jdietzai/note/c-131340092

John Dietz (@jdietzai)

I’m more on the filmmaking side of things, but techie enough to get into trouble, haha. My pipeline differs from yours mainly because I’m relying on training the LLM models for the characters, so each one carries the context of its own character’s knowledge and experience; that means fine-tuning per character. I also have a storyworld LLM model and a storytelling LLM model: storyworld holds the rules and logic of the world, while storytelling covers the knowledge and rules of storytelling itself.

There is also a traffic cop agent that delegates to all of these LLMs as their own agents, including a human who pushes and prods as the story needs. The idea is that the characters reflect before they act or talk (from that agent whitepaper), so the traffic cop can take a character’s reflections and prod the storyworld model, the storytelling model, and the human user whenever a character LLM needs some nudging or extra context that might make for a better storytelling interaction. The traffic cop sends any resulting context back to the character LLMs before they act or talk. All of these back-and-forths are queued, and the whole thing runs way, way slower than realtime with the human involved.

A fundamental idea is that long-term memory and goals live in the fine-tuned models, RAG is for medium-term memory, and the context window is short-term memory. The traffic cop and the human decide when actions or dialogue graduate from short-term to RAG, and I’ll set up some scheduling system that uses the RAG store to update the fine-tunes.

This whole process is set up for storytelling purposes, a tool to help the user write better, not really anything scientific for studying character behavior in AI. I’m also busy with image training for the design of the characters and the world, so it’s all going bit by bit.



Bing and DuckDuckGo render the same title, URL, and description as the Google preview above.

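Since the note reads as an architecture description, here is a minimal Python sketch of the loop it describes, assuming a plain text-in/text-out callable for each model. Every name below (CharacterAgent, TrafficCop, promote, the stub prompts) is a hypothetical illustration, not the author’s actual pipeline.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class CharacterAgent:
        """One fine-tuned character model: long-term memory and goals live
        in its weights; self.context plays the role of short-term memory."""
        name: str
        llm: Callable[[str], str]            # stand-in for the fine-tuned model
        context: list[str] = field(default_factory=list)

        def reflect(self, scene: str) -> str:
            # Characters reflect before they act or talk.
            return self.llm(f"As {self.name}, reflect on: {scene}")

        def act(self, scene: str, nudges: list[str]) -> str:
            self.context.extend(nudges)      # short-term memory update
            prompt = "\n".join(self.context[-20:] + [f"As {self.name}, act in: {scene}"])
            return self.llm(prompt)

    @dataclass
    class TrafficCop:
        """Delegates between character, storyworld, and storytelling models
        plus the human; exchanges are queued, far slower than realtime."""
        characters: list[CharacterAgent]
        storyworld: Callable[[str], str]     # rules and logic of the world
        storytelling: Callable[[str], str]   # knowledge and rules of storytelling
        ask_human: Callable[[str], str]      # the human who pushes and prods
        rag_store: list[str] = field(default_factory=list)  # medium-term memory

        def step(self, scene: str) -> list[str]:
            outputs = []
            for ch in self.characters:
                reflection = ch.reflect(scene)
                # Prod the storyworld/storytelling models and the human with
                # the reflection, then route the context back to the character.
                nudges = [self.storyworld(reflection), self.storytelling(reflection)]
                if note := self.ask_human(reflection):
                    nudges.append(note)
                outputs.append(ch.act(scene, nudges))
            return outputs

        def promote(self, ch: CharacterAgent) -> None:
            # Cop and human decide when short-term context graduates to the
            # RAG store; a scheduled job would later fold it into fine-tunes.
            self.rag_store.extend(ch.context)
            ch.context.clear()

Stub models make the flow visible:

    cop = TrafficCop(
        characters=[CharacterAgent("Ava", llm=lambda p: f"[Ava] {p[-40:]}")],
        storyworld=lambda r: f"[world] {r[:40]}",
        storytelling=lambda r: f"[craft] {r[:40]}",
        ask_human=lambda r: "",              # the human passes this turn
    )
    print(cop.step("The bridge is out."))

The memory split the note describes falls out naturally: weights for long-term, the retrieval store for medium-term, the context window for short-term, with the cop and the human promoting material up the tiers.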
  • General Meta Tags

    14
    • title
      John Dietz (@jdietzai): "I’m more on the filmmaking side of things, but techie enough to get into trouble, haha. The pipeline I’m doing is mainly different than yours because I’m relying on training the LLM models for the characters. So they have context of their own character knowledge and experience…"
  • Open Graph Meta Tags

    9
    • og:url
      https://substack.com/@jdietzai/note/c-131340092
    • og:image (source URL decoded in the sketch after this list)
      https://substackcdn.com/image/fetch/$s_!XCt4!,w_400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Freader%2Fnotes-thumbnail.jpg
    • og:image:width
      400
    • og:image:height
      400
    • og:type
      article
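
A side note on the og:image value above: the Substack CDN fetch URL embeds the source image as a percent-encoded suffix after the transformation parameters (w_400, c_limit, f_auto, q_auto:good, fl_progressive:steep). A quick way to recover it in Python, assuming the encoded suffix contains no literal slashes (true here):

    from urllib.parse import unquote

    cdn = ("https://substackcdn.com/image/fetch/$s_!XCt4!,w_400,c_limit,"
           "f_auto,q_auto:good,fl_progressive:steep/"
           "https%3A%2F%2Fsubstack.com%2Fimg%2Freader%2Fnotes-thumbnail.jpg")
    # Everything after the last literal '/' is the percent-encoded source URL.
    print(unquote(cdn.rsplit("/", 1)[-1]))
    # -> https://substack.com/img/reader/notes-thumbnail.jpg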
  • Twitter Meta Tags

    8
    • twitter:image
      https://substackcdn.com/image/fetch/$s_!XCt4!,w_400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Freader%2Fnotes-thumbnail.jpg
    • twitter:card
      summary
    • twitter:label1
      Likes
    • twitter:data1
      0
    • twitter:label2
      Replies
  • Link Tags

    17
    • alternate
      https://substack.com/@jdietzai/note/c-131340092
    • apple-touch-icon
      https://substackcdn.com/icons/substack/apple-touch-icon.png
    • canonical
      https://substack.com/@jdietzai/note/c-131340092
    • icon
      https://substackcdn.com/icons/substack/icon.svg
    • manifest
      /manifest.json

Links

4