“Yo” the “Shell AI Assistant”: Using an IBM AI Model (granite3.1-dense:2b) with Ollama to Talk to Your OS
Introduction
Shell scripting is a vital skill for automating system tasks, but writing scripts manually can be tedious. Why not use the power of Large Language Models, and everything they know about shell scripting and the CLI, to execute shell commands without having to write any shell script ourselves?

With Ollama, a tool for running AI models locally, we can achieve this!
Let’s get started! We’ll:
- Install Ollama
- Download and serve our model
- Create a custom command (yo) to generate shell scripts using AI
- Use Ollama to process natural language requests
- Automatically execute the generated script from the current directory
Ollama
Ollama is an open-source framework that enables developers to run and customize large language models (LLMs) efficiently on local machines. It provides an easy way to download, customize, and serve models through a simple command-line interface, eliminating the need for complex cloud-based deployments.
Ollama is designed for performance and flexibility, allowing users to integrate AI capabilities into applications while maintaining control over data privacy and computational resources.
Installing Ollama
Installing Ollama is very easy; you can follow the instructions at https://github.com/ollama/ollama for your macOS, Windows, or Linux machine.
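On Linux, for example, the one-line installer from the Ollama README does everything for you:
curl -fsSL https://ollama.com/install.sh | sh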
For this article I am using an IBM Granite model called granite3.1-dense: https://ollama.com/library/granite3.1-dense . This is a smaller model compared to a lot of models out there, and at the same time it is efficient enough for everything we require in this exercise.

Once Ollama is installed, you can pull any model available in the Ollama library onto your machine and run it; at that point you are serving a large language model locally.
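For the Granite model used here, that looks like this (the 2b tag matches the model in the title):
ollama pull granite3.1-dense:2b
ollama run granite3.1-dense:2b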

Creating the “yo” Command
We’ll define a shell alias named yo, which will invoke a script that interacts with Ollama to generate and execute scripts dynamically.
First, download the script os_llm.sh to your machine from GitHub: https://github.com/ijgitsh/os_tools/blob/main/os_llm.sh
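For example, you can fetch it with curl (the raw URL below is derived from the GitHub link above):
curl -fsSL -o os_llm.sh https://raw.githubusercontent.com/ijgitsh/os_tools/main/os_llm.sh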
This script is based on the model mentioned above; however, if you want to use a different model, you need to update line 13 of the script, where the model is set:
OLLAMA_OUTPUT=$(ollama run <model> "you are an expert ....
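If you are curious what a wrapper like this does under the hood, here is a minimal, hypothetical sketch of the same idea; this is not the actual os_llm.sh, and the prompt wording and extraction logic are purely illustrative:
#!/bin/sh
# Hypothetical sketch of an os_llm.sh-style wrapper (not the real script):
# ask the model for a shell script, extract it, and run it.
REQUEST="$*"
# Send the natural-language request to the local model via Ollama
OLLAMA_OUTPUT=$(ollama run granite3.1-dense:2b "You are an expert shell scripter. Reply with only a POSIX shell script, and no explanation, that does the following: $REQUEST")
# Keep only the code between the first pair of ``` fences, if any
SCRIPT=$(printf '%s\n' "$OLLAMA_OUTPUT" | sed -n '/^```/,/^```/p' | sed '/^```/d')
[ -z "$SCRIPT" ] && SCRIPT="$OLLAMA_OUTPUT"
echo "Running generated script:"
printf '%s\n' "$SCRIPT"
# Execute the generated script in the current directory
printf '%s\n' "$SCRIPT" | sh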

Once that is done, run the command below, which makes every invocation of yo point to this shell script:
alias yo='sh <full path>/os_llm.sh'
If you want to know the full path, simply run “pwd” in the directory containing the script to see it.
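For example, using the location on my machine:
cd ~/Documents/GitHub/os_tool
pwd
# /Users/imanjohari/Documents/GitHub/os_tool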

Based on this, I would modify the alias command to:
alias yo='sh /Users/imanjohari/Documents/GitHub/os_tool/os_llm.sh'
Once completed, we need one more step and we are good to go: add the alias line to your shell configuration file (e.g., ~/.zshrc), then reload it with the command below.
source ~/.zshrc
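Putting it together, assuming zsh as in the example above (adjust the path and rc file if you use a different shell):
echo "alias yo='sh /Users/imanjohari/Documents/GitHub/os_tool/os_llm.sh'" >> ~/.zshrc
source ~/.zshrc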
Have fun!
I have tried it with various shell tasks and it has been very promising; you can take it further:
- Improve Extraction: Better parsing for more reliable execution.
- Save Scripts: Log generated scripts for future reuse.
- Custom AI Models: Experiment with different Ollama models, or try your own.
Here are some of my tries with various commands:
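For instance, requests of this shape work well (illustrative examples, not the exact prompts I used):
yo list the 5 largest files in the current directory
yo show the top 5 processes by memory usage
yo find all files modified in the last 24 hours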

You can even run things for containers; the sky is the limit :)
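For example, assuming Docker is installed on your machine (again, an illustrative request):
yo list all running docker containers and show their memory usage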


Conclusion
With this simple setup, you no longer need to manually write shell scripts for common tasks. Just use “yo” and let AI handle the rest! This is a great productivity hack for developers, system admins, and DevOps engineers.
Try it out and let me know what you think!
Resources
Code (GitHub): https://github.com/ijgitsh/os_tools
Ollama (GitHub): https://github.com/ollama/ollama
IBM Granite 3.1-dense model: https://ollama.com/library/granite3.1-dense
IBM Granite on Hugging Face: https://huggingface.co/ibm-granite/granite-3.1-2b-base