blog.apiad.net/p/ai-storytelling-1/comment/131340092
Preview meta tags from the blog.apiad.net website.
Linked Hostnames (2)
Thumbnail

Search Engine Appearance
John Dietz on The Computist Journal
I’m more on the filmmaking side of things, but techie enough to get into trouble, haha. My pipeline differs from yours mainly in that I’m training the LLMs for the characters: each character is fine-tuned so it carries its own knowledge and experience. I also have a story-world LLM and a storytelling LLM. The story-world model holds the rules and logic of the world; the storytelling model holds knowledge and rules of storytelling itself.

There is also a traffic-cop agent that delegates to all these LLMs as their own agents, including a human who pushes and prods as the story needs. The idea is that the characters reflect before they act or talk (from that agent whitepaper), so the traffic cop can take a character’s reflections and prod the story-world model, the storytelling model, and the human user if the character LLMs need some nudging, or some context that might make for a better storytelling interaction. The traffic cop sends any such context back to the character LLMs before they act or talk. All these back-and-forths are queued, and run far slower than real time with the human involved.

A fundamental idea is that long-term memory and goals live in the fine-tuned models, RAG handles medium-term memory, and the context window handles short-term memory. The traffic cop and the human decide when actions or dialogue graduate from short-term memory to RAG, and I’ll set up some scheduling system that uses the RAG store to update the fine-tunes.

This whole process is set up for storytelling purposes, as a tool to help the user write better, not really anything scientific for studying character behavior in AI. I’m also busy with image training for the design of the characters and the world, so it’s all going bit by bit.
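The tiered memory scheme and the traffic-cop delegation described in the comment could be sketched roughly as below. This is a minimal illustration, not the author's implementation: all class and method names (`CharacterMemory`, `TrafficCop`, `promote_to_rag`, etc.) are hypothetical, and the advisor callables stand in for queued LLM or human replies.

```python
from dataclasses import dataclass, field


@dataclass
class CharacterMemory:
    """Three memory tiers for one character. All names are illustrative."""
    context: list[str] = field(default_factory=list)         # short term: context window
    rag_store: list[str] = field(default_factory=list)       # medium term: RAG index
    finetune_queue: list[str] = field(default_factory=list)  # long term: fine-tune batches

    def remember(self, event: str) -> None:
        # Every new action or line of dialogue starts in short-term memory.
        self.context.append(event)

    def promote_to_rag(self, event: str) -> None:
        # The traffic cop / human decided this beat matters beyond the scene.
        if event in self.context:
            self.context.remove(event)
        self.rag_store.append(event)

    def schedule_finetune(self) -> list[str]:
        # Periodic job: drain the RAG store into a batch for the next fine-tune.
        batch, self.rag_store = self.rag_store, []
        self.finetune_queue.extend(batch)
        return batch


class TrafficCop:
    """Routes a character's reflection to advisor agents (story-world model,
    storytelling model, human) and pools whatever context comes back."""

    def __init__(self, advisors: dict):
        self.advisors = advisors  # name -> callable(reflection) -> str | None

    def gather_context(self, reflection: str) -> list[str]:
        notes = []
        for name, ask in self.advisors.items():
            note = ask(reflection)  # stand-in for a queued LLM or human reply
            if note:
                notes.append(f"[{name}] {note}")
        return notes
```

In use, a character's reflection would go through `gather_context` before the character acts, and promoted events would flow `context → rag_store → finetune_queue` on whatever schedule the author sets up.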
General Meta Tags (16)
- title: Comments - AI-Driven Storytelling with Multi-Agent LLMs - Part I
Open Graph Meta Tags (7)
- og:url: https://blog.apiad.net/p/ai-storytelling-1/comment/131340092
- og:image: https://substackcdn.com/image/fetch/$s_!65HS!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fapiad.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D1292391378%26version%3D9
- og:type: article
- og:title: John Dietz on The Computist Journal
Twitter Meta Tags (8)
- twitter:image: https://substackcdn.com/image/fetch/$s_!65HS!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fapiad.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D1292391378%26version%3D9
- twitter:card: summary_large_image
- twitter:label1: Likes
- twitter:data1: 0
- twitter:label2: Replies
Link Tags (33)
- alternate: /feed?sectionId=193045
- apple-touch-icon: https://substackcdn.com/image/fetch/$s_!3m8d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3850122-e10e-4d61-bbdf-ef1d7ebafcab%2Fapple-touch-icon-57x57.png
- apple-touch-icon: https://substackcdn.com/image/fetch/$s_!tKZF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3850122-e10e-4d61-bbdf-ef1d7ebafcab%2Fapple-touch-icon-60x60.png
- apple-touch-icon: https://substackcdn.com/image/fetch/$s_!eMBI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3850122-e10e-4d61-bbdf-ef1d7ebafcab%2Fapple-touch-icon-72x72.png
- apple-touch-icon: https://substackcdn.com/image/fetch/$s_!R7by!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3850122-e10e-4d61-bbdf-ef1d7ebafcab%2Fapple-touch-icon-76x76.png
Links (13)
- https://blog.apiad.net
- https://blog.apiad.net/p/ai-storytelling-1/comment/131340092
- https://blog.apiad.net/p/ai-storytelling-1/comments#comment-131340092
- https://substack.com
- https://substack.com/@jdietzai/note/c-131340092