The gpt
is a cli tool that allows you send something to a OpenAI GPT API compatible service and get the response back.
gpt 'hello, who are you?'
gpt samples/hello.md
Please note there is a @samples/json.txt
in the samples/hello.md
file. Which will be loaded and replaced with the content of the file.
gpt -i samples/cat.jpg samples/image.md
-i
flag is used to specify the image file. can be used multiple times.
gpt -s samples/system.md "who are you?"
-s
flag is used to specify the system prompt. in this case, it force translate the user input to chinese language instead answering the question directly.
input
gpt -M "/xxxx/server.py" 'what is result of 223020320+2321?'
or
gpt -M "http://127.0.0.1:8000/sse" 'what is result of 223020320+2321?'
output
I am calculating the result of 223020320 + 2321.
2025/03/05 18:18:44 INFO Model call tool=add args="{\"a\":223020320,\"b\":2321}"
2025/03/05 18:18:44 INFO Model call result tool=add result="{Content:[{223022641 text}] Role:tool ToolCallID:call_74245828}"
The result of 223020320 + 2321 is 223022641.
the server.py
is a simple mcp server script, with a tool named add
, see MCP Python SDK
for more details.
-M
flag is used to specify the mcp server script. If you have a mcp server running, you can use this flag to send the request to the script. Ensure that the script is executable and correctly configured to process the input.
There are two kinds of mcp server:
-
Local mcp server, communicate with STDIN/STDOUT
- Local mcp server will be started as a child process.
- Each
-M
flag will start a new mcp server, the flag will be split by - Multiple
-M
flags can be used to start multiple mcp servers. All tools will be passed to the LLM. - The executable can be:
.py
python script,python3
will be used to run the script,.venv
will be used if exists..js
javascript script,node
will be used to run the script..ts
typescript script,bun
will be used to run the script..go
go script,go run
will be used to run the script..sh
.bash
.ps1
will be run as shell script.- any other executable file.
-
Remote mcp server, communicate with HTTP SSE
-M
flag will be used to specify the mcp server url, and the request will be sent to the url.
go get -u github.com/elsejj/gpt
After installation, you can do first run to generate the configuration file.
gpt
It's show you the app version, and the configuration file path. Then you can edit the configuration file to set the API key, Gateway URL (aka Open AI Base URL) and other settings.
You can use a LLM gateway such as Portkey-AI to serve the Non-OpenAI compatible API like Gemini, Claude, etc.
I had made a fork of the Portkey-AI gateway, which is available at llm-gateway, enable one key to visit multiple API services.
- copy
samples/powershell.md
/samples/bash.md
to configuration folder - for powershell, create a function in your profile
$PROFILE.CurrentUserCurrentHost
:
function pa {
$cmd = gpt -u -s powershell.md $args
Write-Host $cmd
Set-Clipboard -Value $cmd.Trim()
}
- for bash, add the following line to your
.bashrc
:
alias pa='gpt -u -s bash.md'
now you can use pa
to generate the powershell command. for example:
pa list all image files by date desc
will generateGet-ChildItem | Sort-Object LastWriteTime -Descending
and copy to clipboard.pa list recent 10 files
will generatels -lt | head -n 10
and copy to clipboard.