As always, never rely on LLMs for anything factual. They're only good for things with a massive tolerance for error, such as entertainment (e.g. RPGs)
I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn't need that thing included
Sorry for being vague, I just didn’t want to post my home town on here
You can say Space Needle. We get it.
The issue for RPGs is that LLMs have such "small" context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later
Although, similar to how DeepSeek uses two stages ("how would you solve this problem", then "solve this problem following this train of thought"), you could feed the model the recent conversation plus a private/unseen "notebook" that gets modified or appended to based on recent events (rough sketch below). Doing that properly would need a whole new model, which likely wouldn't be profitable short term, but I imagine the same infrastructure could be used for any LLM use case where fine details over a long period matter more than specific wording, including factual things
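Very rough sketch of the notebook idea in Python, just to make it concrete; the llm() callable, the prompt wording, and the constants are all placeholders I made up, not any real vendor API:

```python
# Sketch: a hidden GM "notebook" that survives even when old turns
# fall out of the context window. Stage 1 updates the notebook from
# recent events; stage 2 answers using notebook + recent turns only.

RECENT_TURNS = 10  # how many raw messages to keep verbatim (arbitrary)

def update_notebook(llm, notebook: str, recent_turns: list[str]) -> str:
    """Stage 1: ask the model to fold durable facts (NPC names, clues,
    player decisions) from recent turns into the private notebook."""
    prompt = (
        "You maintain a GM notebook of facts that may matter later.\n"
        f"Current notebook:\n{notebook}\n\n"
        "Recent turns:\n" + "\n".join(recent_turns) +
        "\n\nRewrite the notebook, adding new durable facts and keeping old ones."
    )
    return llm(prompt)

def respond(llm, notebook: str, recent_turns: list[str], player_msg: str) -> str:
    """Stage 2: answer the player using the notebook plus only the most
    recent turns, instead of the whole (truncated) chat history."""
    prompt = (
        f"GM notebook (private, authoritative):\n{notebook}\n\n"
        "Recent conversation:\n" + "\n".join(recent_turns[-RECENT_TURNS:]) +
        f"\n\nPlayer: {player_msg}\nGM:"
    )
    return llm(prompt)
```

The point is just that the notebook is model-written state, not a transcript, so fine details from hours ago can stay in scope without paying for a huge context window.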
The problem is that the "train of thought" is also hallucinations. It might make the model better with more compute, but it's diminishing returns.
RPGs can use LLMs because they're not critical. If the LLM spews out nonsense you don't like, you just ask it to redo it, because it's all subjective.
Or at least as an assistant in a field you're an expert in. I love using it for boilerplate at work (tech).
Nonsense, I use it a ton for science and engineering, it saves me SO much time!
Do you blindly trust the output or is it just a convenience and you can spot when there’s something wrong? Because I really hope you don’t rely on it.
How could I blindly trust anything in this context?