Bash Functions To Make Ollama Slightly Easier To Run In Docker
I'm not gonna get into the LLM/"AI" debate right now. I'm not a fan of it, but I'm also not a purist, and for a variety of reasons I've had to learn a little about it. Fortunately, I have yet to actually need cloud-based models and have been able to use local ones just fine. Running models locally doesn't get around all the ethical implications, but it goes a long way, in my estimation.

To that end, I've found Ollama to be the easiest way to get up and running. I like the idea of not having it installed natively, especially since the only option is to manually install the binary (or curl | sh it :/). Luckily, Ollama publishes a container image…
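
As a rough sketch of where that leads, here's the shape of a bash wrapper around the container. The function names and the `ollama` container/volume names are my own placeholders, but the image, volume path, and port come from Ollama's published Docker instructions:

```bash
#!/usr/bin/env bash
# Sketch only: function names are hypothetical; the docker run flags
# follow Ollama's documented CPU-only container setup.

# Start the Ollama container, reusing it if it already exists.
ollama-up() {
  docker start ollama 2>/dev/null ||
    docker run -d \
      --name ollama \
      -v ollama:/root/.ollama \
      -p 11434:11434 \
      ollama/ollama
}

# Forward arguments to the ollama CLI inside the container,
# e.g. `ollama-cli run llama3`.
ollama-cli() {
  docker exec -it ollama ollama "$@"
}
```

Sourced from your shell config, something like this makes `ollama-cli run llama3` feel close to a native install while keeping everything inside the container.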
