Release: llm-questioncache
I just released version 0.1 of llm-questioncache, a plugin for Simon Willison's llm. It sends your questions to your default LLM with a system prompt that elicits short, to-the-point answers, and it maintains a local cache of answers so that you only have to hit the LLM once for each bit of esoteric knowledge.
It uses embeddings of each question to find similar questions, so that if you ask "How do you compare two branches in git" and then "How to compare different branches in git" you'll get the same answer.
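The lookup idea can be sketched in a few lines. This is not the plugin's actual implementation (which would use a real embedding model via llm); the `embed` function below is a hypothetical bag-of-words stand-in, and the similarity threshold is an assumed value, chosen only to illustrate how a cache can match paraphrased questions.

```python
import math

def embed(text):
    # Stand-in embedding: a bag-of-words token count.
    # The real plugin would use an embedding model; this toy
    # version only exists to demonstrate the lookup logic.
    vec = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse vectors (dicts).
    dot = sum(count * b.get(token, 0) for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class QuestionCache:
    def __init__(self, threshold=0.6):
        # threshold is an assumed cutoff for "similar enough".
        self.entries = []  # (embedding, question, answer) tuples
        self.threshold = threshold

    def lookup(self, question):
        # Return the cached answer for the most similar stored
        # question, or None if nothing clears the threshold.
        query = embed(question)
        best_score, best_answer = 0.0, None
        for vec, _, answer in self.entries:
            score = cosine(query, vec)
            if score > best_score:
                best_score, best_answer = score, answer
        return best_answer if best_score >= self.threshold else None

    def store(self, question, answer):
        self.entries.append((embed(question), question, answer))
```

With this sketch, storing an answer for "How do you compare two branches in git" lets a lookup for "How to compare different branches in git" hit the cache, while an unrelated question misses and would fall through to the LLM.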
If you've already got LLM installed, you can try it out with:

    llm install llm-questioncache
Here's the PyPI package:
https://pypi.org/project/llm-questioncache/
And here's the source code:
https://github.com/nathanielknight/llm-questioncache