How to run AI Prompt from the command line
This page explains how to run AI prompts from the command line using aiprompt.exe. You can download aiprompt.exe from this page and see how it can be used from cmd.exe or PowerShell. Let's get started!
Download
Download the executable: AIPrompt.exe
Download the executable as a zip: AIPrompt.zip
What is AI Prompt
AI Prompt is a command-line tool that lets you run AI prompts from the command line, evaluated by HTTP AI APIs such as Ozeki AI Server or OpenAI (ChatGPT).
Github
This project is available on Github with full source code under MIT license. Please take a look at the Github project:
https://github.com/ozekiweb/AIPromptExe
Features:
- Lightweight: Allows users to send prompts directly to HTTP AI APIs from the command line.
- Flexible: Read prompts from standard I/O, send a simple prompt or customize your request using JSON, over either HTTP or HTTPS.
- Multiple API Support: Supports the Ozeki 10 HTTP API, the OpenAI API, DeepSeek, and all other OpenAI API compatible providers.
- Configurable: Easy to set your preferences via command-line arguments and environment variables.
- Logging Mode: Provides detailed logging to facilitate debugging and monitoring.
- Interactive Mode: Chat with an LLM by sending multiple prompts through HTTP
- Fast deployment: Ready to use, self-contained executable file is available to download.
- Open Source: This tool is fully open sourced, enabling users to build, inspect, and modify the source code according to their needs
How to run AI Prompt from the command line (Easy Steps)
To run AI Prompt from the command line:
- Download AI Prompt
- Create HTTP API user in Ozeki
- Send a prompt using the HTTP API user
- Send JSON prompts
- Send multiple prompts with Interactive mode
- Command line parameters
- Usage examples
- Environment variables
- Finding errors
How to download Ozeki AI Prompt (Video tutorial)
This video tutorial shows how to download Ozeki AI Prompt from the Ozeki Website. It also shows how to test if your installation was successful, by checking the version number.
How to download Ozeki AI Prompt
To start the download process, click here or on the blue AIPrompt.exe link located below the introduction.
To monitor the download progress, look at the downloads bubble on the right side of the browser's toolbar. The aiprompt.exe is very lightweight, so the download might finish almost instantly.
After the download has finished, copy the aiprompt.exe executable to the C:\AIPrompt location. You can choose another location if you wish, but C:\AIPrompt is recommended to keep things tidy.
To test if the file has been downloaded correctly, open a Command Prompt with a working directory set to the C:\AIPrompt folder. Type cmd in the address bar of the File Explorer and click on the arrow located on the address bar's right side. (Figure 5)
To verify if the aiprompt.exe is working correctly, check its version. To check the version, enter the following command:
aiprompt.exe -v
After running the command, the version number of Ozeki AI Prompt will be printed to the console. If the version checking isn't working for you, make sure you completed the previous steps correctly.
How to create an HTTP API user in Ozeki AI Server (Video tutorial)
This video demonstrates how to create an HTTP API user in Ozeki AI Server. It also shows how to enable communication event logs, which will be useful later on to confirm that your HTTP API user works properly.
How to create an HTTP API user in Ozeki AI Server
To gain access to the Ozeki HTTP AI API, you will need an HTTP user to authenticate yourself to the AI Server.
To create an HTTP user, first open AI Studio by clicking its desktop icon on the Ozeki Desktop:
After opening the AI Studio, click on the AI Gateway tab located at the top right corner.
On the AI Gateway dashboard, click on the blue "Add new AI chat bot" link, which can be found at the AI chat bots panel. This will navigate you to the "Add new AI chat bot" install list.
Click on the "Install" link of the "HTTP User" install item, located under the "Network users" install section.
Enter a username and password into the input fields on the General tab, inside the "Login information" groupbox. Throughout this guide the username "user" and the password "pass" will be used, but you can use any values.
Select the Advanced tab. Inside the Log level groupbox, tick the "Log communication events" checkbox. This will be useful for testing whether the creation of the HTTP user was successful. Click the OK button to finalize the settings.
Clicking the OK button will redirect you to the details page of your newly created HTTP user, with the Events tab page already open.
To test the HTTP user, open the Command Prompt at the location of your installed AI Prompt. Run aiprompt.exe with the following options to send a basic prompt using your HTTP user:
aiprompt.exe "What color is an apple" -u user -p pass
Change the username and password to the username and password of your HTTP User.
Wait for a response from the AI Server. If everything was set correctly until this point, AI Prompt will show a response like in Figure 16.
Head back to the HTTP user's details page. There should be a new log entry in the log viewer, confirming the HTTP user works properly.
If you wish to create an Ozeki API Key for authentication, check out our dedicated guide about generating API keys.
How to send a prompt using the HTTP API user (Video tutorial)
This tutorial demonstrates how to send a prompt using the HTTP API user.
How to send a prompt using the HTTP API user
Before you go any further, you'll need your own LLM model. Check out the Ozeki AI Quick Start guide to find out how to set up an LLM model in Ozeki AI Studio.
Configuring the application
Before sending your first basic prompt to your local Ozeki AI Server using AI Prompt, let's check out a few important settings and how to change them:
Prompt
The prompt contains the instruction you want to give to your AI model. When sending a basic prompt, you only have to enter the content you want to send, and Ozeki AI Prompt does the heavy lifting for you by building the HTTP request.
Credentials
To connect to an AI Server, you must provide the credentials of an HTTP user. Use the -u option to set the username and the -p option to specify the password.
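Under the hood, the username and password pair travels as an HTTP Basic Authorization header (you can spot it later in the logging examples as `Authorization: Basic dXNlcjpwYXNz`). As an illustration only, not the tool's actual code, this Python sketch shows how that header value is derived:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value for -u/-p credentials."""
    token = base64.b64encode(f"{username}:{password}".encode("ascii")).decode("ascii")
    return f"Basic {token}"

# The demo credentials used throughout this guide:
print(basic_auth_header("user", "pass"))  # Basic dXNlcjpwYXNz
```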
Server URL
To connect to your Ozeki AI Server, you need to specify the network address where your AI Model can be accessed through HTTP API. By default the URL is the same as the Ozeki AI Server's address: http://localhost:9509/api?command=chatgpt. If you are using AI Prompt on the same system where the AI Server is running with default settings, you don't need to change the URL argument. To set the Server URL, use the -h option.
Model
You can set which AI Model should the AI Server use to answer the prompt. By default this is set to GGUF_Model_1, which is the default name of the first GGUF Model created in AI Studio. If your AI Model of choice is named differently on the AI Server, use the -m option to set your desired model.
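The logging examples later in this guide reveal that a basic prompt is wrapped into an OpenAI-style chat request body before it is sent. As a sketch of that wrapping (the leading "You are an assistant." message and the default model name are taken from the debug logs shown below; this is not the tool's actual source code):

```python
import json

def build_request_body(prompt: str, model: str = "GGUF_Model_1") -> str:
    """Wrap a plain prompt into the OpenAI-style chat request body
    that the debug logs show AI Prompt sending."""
    body = {
        "model": model,
        "messages": [
            # This instruction message appears in the tool's debug logs.
            {"role": "assistant", "content": "You are an assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

print(build_request_body("Describe elephants for me"))
```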
Send basic prompt
Important note: The output of AI Prompt may vary, as the LLM model you're using might differ from the one used in the examples provided, thus making the AI's response different.
To send a basic prompt, specify the required options:
::Change the values to your configuration
aiprompt.exe "Provide a pancake recipe for me." -u user -p pass -h http://localhost:9509/api?command=chatgpt -m GGUF_Model_1
You can either type in or paste the command above.
Output:
Role: assistant
Content: Sure! Here's a simple pancake recipe that you can make with those ingredients:

### Basic Pancakes

#### Ingredients:
- 1 cup all-purpose flour
- 2 tablespoons sugar
- 1 tablespoon baking powder
- 1/4 teaspoon salt
- 1 cup milk
- 1 large egg
- 2 tablespoons melted butter (or oil)

#### Instructions:
1. **Mix Dry Ingredients:**
   - In a large bowl, whisk together the flour, sugar, baking powder, and salt.
2. **Mix Wet Ingredients:**
   - In a separate bowl, beat the egg and then add the milk and melted butter (or oil). Mix well.
3. **Combine Wet and Dry Ingredients:**
   - Pour the wet ingredients into the bowl with the dry ingredients. Stir until just combined. Be careful not to overmix; the batter should be slightly lumpy.
4. **Preheat Pan:**
   - Heat a non-stick skillet or griddle over medium heat. You can test if it's ready by sprinkling a few drops of water on it; they should sizzle and evaporate.
5. **Cook Pancakes:**
   - Pour 1/4 cup of batter onto the skillet for each pancake. Cook until bubbles form on the surface and the edges look set, about 2-3 minutes.
   - Flip the pancake and cook for another 1-2 minutes, or until golden brown and cooked through.
6. **Serve:**
   - Serve warm with your favorite toppings, such as maple syrup, fruit, or butter.

Enjoy your homemade pancakes!
Use default values
In the previous example, we specified all options required to run AI Prompt. The URL and Model specified were the default values, so they don't need to be set explicitly:
::Change the credentials to your settings
aiprompt.exe "Describe elephants for me" -u user -p pass
Output:
Role: assistant Content: Elephants are large, majestic mammals known for their impressive size, intelligence, and social behavior. There are three main species: the African elephant, the African bush elephant, and the African forest elephant. The Asian elephant is another species, found primarily in Southeast Asia. 1. **Size and Appearance**: Elephants are the largest land animals. African elephants are the largest, with males (bulls) standing up to 13 feet tall and weighing up to 14,000 pounds. Asian elephants are smaller, with males reaching about 11 feet in height and weighing up to 11,000 pounds. They have distinctive features such as long trunks, large ears, and tusks (which are present in males and some females). 2. **Trunk**: The trunk is a versatile appendage used for breathing, smelling, touching, grasping, and producing sound. It is an elongated fusion of the nose and upper lip. 3. **Ears**: Elephants have large ears that help regulate their body temperature. African elephants have large, fan-shaped ears, while Asian elephants have smaller, rounded ears. 4. **Tusks**: Tusks are elongated, continuously growing front teeth made of ivory. They are used for digging, foraging, and defense. Not all elephants have tusks; some females and Asian elephants may be tuskless. 5. **Social Structure**: Elephants are highly social animals, living in matriarchal herds led by an older female. Males often live solitary lives or form bachelor groups. 6. **Intelligence**: Elephants are known for their high intelligence, memory, and emotional depth. They exhibit behaviors such as mourning their dead, helping injured individuals, and displaying empathy. 7. **Habitat**: Elephants inhabit a variety of environments, including savannas, forests, deserts, and marshes, depending on the species. 8. **Diet**: They are herbivores, feeding on grasses, leaves, fruits, and bark. Their diet can vary based on their habitat. 9. 
**Threats**: Elephants face threats from habitat loss, human-wildlife conflict, and poaching for their ivory. 10. **Conservation**: Efforts are ongoing to protect elephants and their habitats, with various international laws and conservation programs in place.
JSON prompts
Ozeki AI Prompt supports JSON formatted prompts as well. To enable JSON mode, use the -j option. Check out the How to use JSON to execute an AI prompt guide to learn more about JSONs in general, and how to use them with AI Prompt.
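In JSON mode the tool validates your input before sending it (the logging example further below prints "Checking if JSON is valid"). A minimal Python sketch of such a check, purely for illustration:

```python
import json

def is_valid_json(text: str) -> bool:
    """Mirror of the 'Checking if JSON is valid' step seen in the -jl logs."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json('{"model": "GGUF_Model_1", "messages": []}'))  # True
print(is_valid_json("not a json prompt"))  # False
```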
What is interactive mode
Interactive mode allows users to engage directly with the Ozeki AI Prompt in real-time, making it a convenient and dynamic way to interact with your LLM model through Ozeki AI Server. In this mode, you can type prompts directly into the terminal and receive responses immediately, enabling an intuitive, conversational workflow. This is particularly useful for exploratory tasks, testing, or scenarios where step-by-step interaction with the AI is needed without the need to redefine prompts.
How to use interactive mode (Video tutorial)
In this short video demonstration, you will learn how to use interactive mode in Ozeki AI Prompt. The video will show you how to set your initial configuration and prompt, how to send additional prompts, and how to exit the interactive mode.
How to use interactive mode
Interactive mode is compatible with every other option Ozeki AI Prompt supports: you can use either an API key or an HTTP user for credentials, you can change the server URL and the model, and you can use JSON prompts as well. Let's see an example with a basic prompt as the initial prompt, and HTTP user authentication.
Run AI Prompt with interactive mode enabled
Use the -i option to enable interactive mode:
::Change the credentials to yours
aiprompt.exe "What is the capital of UK" -u user -p pass -i
Wait for initial response
After running the AI Prompt in interactive mode, AI Prompt will wait for a response to your initial request (Figure 20). When the response arrives, AI Prompt will show you the current state of your chat (Figure 21). After the messages, the "Enter your prompt: " input shows up, where you can type your next prompt you wish to send.
Output:
Messages from the initial prompt:
Message #1:
Role: assistant
Content: You are an assistant.
Message #2:
Role: user
Content: What is the capital of UK
Message #3:
Role: assistant
Content: The capital of the United Kingdom is London. It is also the largest city in the UK and a major global city known for its history, culture, and economic significance.
Enter your prompt:
Send another prompt directly
Type in your next prompt, then press Enter to send it. AI Prompt will wait for a response again (Figure 22).
If the response arrives successfully, it will be shown below your prompt.
Output:
Response:
Role: assistant
Content: Certainly! Here are three of the most popular tourist attractions in London:
1. **The British Museum**: Known for its extensive collection of art and artifacts from around the world, including the Rosetta Stone and the Elgin Marbles.
2. **The Tower of London**: A historic castle located on the north bank of the River Thames, famous for its role in English history and as home to the Crown Jewels.
3. **Big Ben and the Houses of Parliament**: Iconic landmarks featuring the famous clock tower, Big Ben, and the stunning architecture of the UK's seat of government.
These attractions are must-see destinations for visitors to London.
Enter your prompt:
Output:
Response: Role: assistant Content: The British Museum, located in London, is one of the world's leading museums and research institutions. Established in 1753, it is renowned for its vast and diverse collection of over 8 million objects, which span over two million years of human history. Here are some highlights and features of the British Museum: 1. **Collections**: The museum houses artifacts from all over the world, including the Rosetta Stone, the Elgin Marbles, the Egyptian mummies, and the Parthenon sculptures. It covers a wide range of cultures and time periods, from ancient Egypt and Greece to the Americas and Asia. 2. **Architecture**: The museum's neoclassical building, designed by Sir John Soane, is a work of art in itself. The Great Court, a striking glass and steel structure designed by Foster and Partners, was added in the 2000s and is a popular spot for visitors. 3. **Research and Education**: The British Museum is not just a tourist attraction; it is also a center for research and education. It offers lectures, workshops, and exhibitions that provide deeper insights into its collections. 4. **Exhibitions**: The museum frequently hosts special exhibitions on various themes, drawing from its extensive collections and beyond. These exhibitions are often accompanied by educational programs and events. 5. **Accessibility**: The museum is free to enter, making it accessible to a wide audience. However, there may be charges for special exhibitions. 6. **Cultural Significance**: As one of the oldest and most comprehensive museums in the world, the British Museum plays a crucial role in preserving cultural heritage and promoting understanding among different cultures. Overall, the British Museum is a treasure trove of human history and culture, offering visitors an unparalleled experience of the world's artistic and historical achievements. Enter your prompt:
Exit the program
To exit the AI Prompt interactive mode, type exit into the "Enter your prompt: " input, then press enter. The working directory will show up in your terminal, confirming you successfully exited the interactive mode of AI Prompt.
Check out the full output of your interactive session. If your messages can't fit into your terminal, you can scroll up and down using your mouse wheel.
Full output of the program:
Messages from the initial prompt: Message #1: Role: assistant Content: You are an assistant. Message #2: Role: user Content: What is the capital of UK Message #3: Role: assistant Content: The capital of the United Kingdom is London. It is also the largest city in the UK and a major global city known for its history, culture, and economic significance. Enter your prompt: Give me the three most popular tourist attractions in London Response: Role: assistant Content: Certainly! Here are three of the most popular tourist attractions in London: 1. **The British Museum**: Known for its extensive collection of art and artifacts from around the world, including the Rosetta Stone and the Elgin Marbles. 2. **The Tower of London**: A historic castle located on the north bank of the River Thames, famous for its role in English history and as home to the Crown Jewels. 3. **Big Ben and the Houses of Parliament**: Iconic landmarks featuring the famous clock tower, Big Ben, and the stunning architecture of the UK's seat of government. These attractions are must-see destinations for visitors to London. Enter your prompt: Tell me more about The British Museum Response: Role: assistant Content: The British Museum, located in London, is one of the world's leading museums and research institutions. Established in 1753, it is renowned for its vast and diverse collection of over 8 million objects, which span over two million years of human history. Here are some highlights and features of the British Museum: 1. **Collections**: The museum houses artifacts from all over the world, including the Rosetta Stone, the Elgin Marbles, the Egyptian mummies, and the Parthenon sculptures. It covers a wide range of cultures and time periods, from ancient Egypt and Greece to the Americas and Asia. 2. **Architecture**: The museum's neoclassical building, designed by Sir John Soane, is a work of art in itself. 
The Great Court, a striking glass and steel structure designed by Foster and Partners, was added in the 2000s and is a popular spot for visitors. 3. **Research and Education**: The British Museum is not just a tourist attraction; it is also a center for research and education. It offers lectures, workshops, and exhibitions that provide deeper insights into its collections. 4. **Exhibitions**: The museum frequently hosts special exhibitions on various themes, drawing from its extensive collections and beyond. These exhibitions are often accompanied by educational programs and events. 5. **Accessibility**: The museum is free to enter, making it accessible to a wide audience. However, there may be charges for special exhibitions. 6. **Cultural Significance**: As one of the oldest and most comprehensive museums in the world, the British Museum plays a crucial role in preserving cultural heritage and promoting understanding among different cultures. Overall, the British Museum is a treasure trove of human history and culture, offering visitors an unparalleled experience of the world's artistic and historical achievements. Enter your prompt: exit
It is this easy to use interactive mode in AI Prompt! Interactive mode's intuitive workflow, flexibility and immediate feedback make it great for situations where multiple prompts must be sent in a conversational format. In the next section, you will learn about the command line parameters of AI Prompt, making you an expert in configuring the application!
Command line parameters
The following table contains the parameters that can be passed to configure the application.
Argument | Description | Default Value |
---|---|---|
<prompt> | The prompt to be sent to the HTTP AI API. | - |
-h <url> | Specifies the URL of the server. | http://localhost:9509/api?command=chatgpt |
-u <username> | Specifies the username. | - |
-p <password> | Specifies the password. | - |
-a <apikey> | Specifies the API key. | - |
-j | Specifies if the prompt is in JSON format | False |
-m <model> | Specifies the model name. | GGUF_Model_1 |
-l | Enables logging mode. | False |
-i | Enables interactive mode. | False |
-v | Displays version information. | - |
-? | Displays help and usage information. | - |
To run the tool, a prompt and credentials must be specified. The prompt can be set via the standard input or the prompt argument. For the credentials, you either have to set the username and password of the HTTP user, or your API key. If both are defined, the API key will be used for authentication.
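The credential precedence rule above can be sketched as follows (a hypothetical helper for illustration, not the tool's actual code):

```python
def choose_auth(username=None, password=None, api_key=None):
    """Pick the authentication method; the API key wins if both are given."""
    if api_key:
        return ("Bearer", api_key)                  # API key authentication
    if username and password:
        return ("Basic", f"{username}:{password}")  # HTTP user authentication
    raise ValueError("either an API key or a username/password pair is required")

# Both credential types supplied: the API key takes precedence.
print(choose_auth(username="user", password="pass", api_key="my_api_key"))
```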
Examples
Send basic prompt with HTTP User Authentication:
aiprompt.exe "What is the capital of UK" -h http://localhost:9509/api?command=chatgpt -u user -p pass -m GGUF_Model_1
Output:
Role: assistant Content: The capital of the United Kingdom is London. It is also the largest city in the UK and a major global city known for its history, culture, and economic significance.
Send JSON prompt with API Key Authentication using standard I/O and logging mode:
echo {"model": "GGUF_Model_1", "messages":[{"role": "user", "content": "What is the capital of Germany"}]} | aiprompt.exe -h http://localhost:9509/api?command=chatgpt -a your_api_key -jl
Output:
2025-01-24 15:13:44 [DEBUG] Using StandardInput as prompt.
2025-01-24 15:13:44 [DEBUG] Parsing URL
2025-01-24 15:13:44 [DEBUG] Parsing URL done
2025-01-24 15:13:44 [DEBUG] Checking if URL fits HTTP or HTTPS scheme
2025-01-24 15:13:44 [DEBUG] Checking done
2025-01-24 15:13:44 [DEBUG] Creating Authorisation Header
2025-01-24 15:13:44 [DEBUG] Using API Authorisation Header
2025-01-24 15:13:44 [DEBUG] Creating Authorisation Header done!
2025-01-24 15:13:44 [DEBUG] Creating Request Body
2025-01-24 15:13:44 [DEBUG] Checking if JSON is valid
2025-01-24 15:13:44 [DEBUG] {"model": "GGUF_Model_1", "messages":[{"role": "user", "content": "What is the capital of Germany"}]}
2025-01-24 15:13:44 [DEBUG] JSON is valid.
2025-01-24 15:13:44 [DEBUG] Request body done
2025-01-24 15:13:44 [DEBUG] Setting up request
2025-01-24 15:13:44 [DEBUG] Standard execution
2025-01-24 15:13:44 [DEBUG] HTTP Request: Method: POST, RequestUri: 'http://localhost:9509/api?command=chatgpt', Version: 1.1, Content: System.Net.Http.StringContent, Headers: { Authorization: Bearer api_key Content-Type: application/json; charset=us-ascii }
2025-01-24 15:13:44 [DEBUG] Content: {"model":"GGUF_Model_1","messages":[{"role":"user","content":"What is the capital of Germany"}]}
2025-01-24 15:13:44 [DEBUG] Sending HTTP Request...
2025-01-24 15:13:46 [DEBUG] HTTP Request sent: Method: POST, RequestUri: 'http://localhost:9509/api?command=chatgpt', Version: 1.1, Content: System.Net.Http.StringContent, Headers: { Authorization: Bearer api_key Content-Type: application/json; charset=us-ascii Content-Length: 96 }
2025-01-24 15:13:46 [DEBUG] HTTP Response: 200 OK
2025-01-24 15:13:46 [DEBUG] { "id": "chatcmpl-IEAUZYYZKCIKIWJMBRHERESPEUWNL", "object": "chat.completion", "created": 1737731626, "model": "GGUF_Model_1", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "The capital of Germany is Berlin.", "refusal": null }, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0, "completion_tokens_details": { "reasoning_tokens": 0 } }, "system_fingerprint": "fp_f85bea6784" }
Read JSON prompt from file with HTTP User Authentication
::Contents of prompt.json
{
  "model":"GGUF_Model_1",
  "messages":[
    {
      "role":"user",
      "content":"Which country's capital is Paris"
    }
  ]
}

::Command:
type prompt.json | aiprompt.exe -h http://localhost:9509/api?command=chatgpt -u user -p pass -j
Output:
{
  "id": "chatcmpl-UXIOOBNMGSPPQIQWQBHUOYNGBDAXD",
  "object": "chat.completion",
  "created": 1737731364,
  "model": "GGUF_Model_1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris.",
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0,
    "completion_tokens_details": {
      "reasoning_tokens": 0
    }
  },
  "system_fingerprint": "fp_f85bea6784"
}
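In JSON mode the raw chat.completion response is printed; the assistant's text lives at choices[0].message.content. If you want to post-process such a response in a script, a minimal Python sketch (using a shortened copy of the response above) might look like this:

```python
import json

# Example chat.completion response, shortened from the output above.
raw = ('{"model": "GGUF_Model_1", "choices": [{"index": 0, "message": '
       '{"role": "assistant", "content": "The capital of France is Paris."}, '
       '"finish_reason": "stop"}]}')

# Drill down to the assistant's reply text.
reply = json.loads(raw)["choices"][0]["message"]["content"]
print(reply)  # The capital of France is Paris.
```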
Chat with an AI Model by using Interactive Mode and HTTP User Authentication:
aiprompt.exe "What is the capital of Germany?" -h http://localhost:9509/api?command=chatgpt -u user -p pass -i
Environment variables
AI Prompt can be configured with the following environment variables:
Key | Usage | Accepted Values |
---|---|---|
OZEKI_AIPROMPT_URL | Specifies the URL of the server | URLs with http or https scheme |
OZEKI_AIPROMPT_USERNAME | Specifies the username for authentication | string |
OZEKI_AIPROMPT_PASSWORD | Specifies the password for authentication | string |
OZEKI_AIPROMPT_APIKEY | Specifies the API key | string |
OZEKI_AIPROMPT_USE_JSON | Specifies if the prompt is in JSON format | True, False |
OZEKI_AIPROMPT_MODEL | Specifies the model name | string |
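As an illustration, resolving a setting from an environment variable might look like the sketch below. Note that the precedence shown (an explicit command-line value overriding the environment variable) is an assumption for illustration; this guide does not state which source wins.

```python
import os

def setting(env_key, cli_value=None, default=None):
    """Resolve a setting: an explicit command-line value first (assumed),
    then the environment variable, then the built-in default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_key, default)

os.environ["OZEKI_AIPROMPT_MODEL"] = "GGUF_Model_1"
print(setting("OZEKI_AIPROMPT_MODEL"))                       # GGUF_Model_1
print(setting("OZEKI_AIPROMPT_MODEL", cli_value="MyModel"))  # MyModel
```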
To learn more about environment variables, advantages and usage with AI Prompt, check out the How to configure AI prompt with environment variables guide.
Finding errors
If something goes wrong when running aiprompt.exe, the best way to find the issue is to turn on verbose logging mode.
What is verbose logging mode in Ozeki AI Prompt?
Verbose logging mode provides detailed, real-time insights into the application's operations, enabling you to monitor its behavior and troubleshoot issues effectively. By turning on verbose logging, you can view information such as HTTP requests, responses, and potential errors, making it easier to identify and resolve problems. Logging mode supports all running options: Basic prompt, JSON prompt and interactive mode as well.
How to turn on verbose logging mode in Ozeki AI Prompt
To turn on verbose logging, specify the -l option:
aiprompt.exe -l
Example
Using AI Prompt without logging:
::Change the credentials to your HTTP user's username and password
aiprompt.exe "Write few sentences about fishing" -u user -p pass
Output:
Role: assistant Content: Fishing is a popular recreational activity that involves catching fish from various bodies of water such as rivers, lakes, and oceans. It can be done for sport, relaxation, or sustenance. Anglers use different techniques and equipment, including rods, reels, and various types of bait and lures, to catch a wide variety of fish species. Fishing also plays a crucial role in many cultures and economies, providing food and employment opportunities. Additionally, it can be a way to connect with nature and enjoy the outdoors.
Using AI Prompt with logging mode enabled:
aiprompt.exe "Write few sentences about fishing" -u user -p pass -l
Output:
2025-01-23 14:37:01 [DEBUG] Parsing URL
2025-01-23 14:37:01 [DEBUG] Parsing URL done
2025-01-23 14:37:01 [DEBUG] Checking if URL fits HTTP or HTTPS scheme
2025-01-23 14:37:01 [DEBUG] Checking done
2025-01-23 14:37:01 [DEBUG] Creating Authorisation Header
2025-01-23 14:37:01 [DEBUG] Using Basic Authorisation Header
2025-01-23 14:37:01 [DEBUG] Creating Authorisation Header done!
2025-01-23 14:37:01 [DEBUG] Creating Request Body
2025-01-23 14:37:01 [DEBUG] Generating Request Body from prompt
2025-01-23 14:37:01 [DEBUG] Generating done
2025-01-23 14:37:01 [DEBUG] {"model":"GGUF_Model_1","messages":[{"role":"assistant","content":"You are an assistant."},{"role":"user","content":"Write few sentences about fishing"}]}
2025-01-23 14:37:01 [DEBUG] Request body done
2025-01-23 14:37:01 [DEBUG] Setting up request
2025-01-23 14:37:01 [DEBUG] Standard execution
2025-01-23 14:37:01 [DEBUG] HTTP Request: Method: POST, RequestUri: 'http://localhost:9509/api?command=chatgpt', Version: 1.1, Content: System.Net.Http.StringContent, Headers: { Authorization: Basic dXNlcjpwYXNz Content-Type: application/json; charset=us-ascii }
2025-01-23 14:37:01 [DEBUG] Content: {"model":"GGUF_Model_1","messages":[{"role":"assistant","content":"You are an assistant."},{"role":"user","content":"Write few sentences about fishing"}]}
2025-01-23 14:37:01 [DEBUG] Sending HTTP Request...
2025-01-23 14:37:07 [DEBUG] HTTP Request sent: Method: POST, RequestUri: 'http://localhost:9509/api?command=chatgpt', Version: 1.1, Content: System.Net.Http.StringContent, Headers: { Authorization: Basic dXNlcjpwYXNz Content-Type: application/json; charset=us-ascii Content-Length: 154 }
2025-01-23 14:37:07 [DEBUG] HTTP Response: 200 OK
2025-01-23 14:37:07 [DEBUG] { "id": "chatcmpl-MMWZSTEFYDOBXIBLKHHMXQNOOYTEH", "object": "chat.completion", "created": 1737643027, "model": "GGUF_Model_1", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Fishing is a popular recreational activity that involves catching fish from various bodies of water such as rivers, lakes, and oceans. It can be done for sport, relaxation, or sustenance. Anglers use different techniques and equipment, including rods, reels, and various types of bait and lures, to catch a wide variety of fish species. Fishing also plays a crucial role in many cultures and economies, providing food and employment opportunities. Additionally, it can be a way to connect with nature and enjoy the outdoors.", "refusal": null }, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0, "completion_tokens_details": { "reasoning_tokens": 0 } }, "system_fingerprint": "fp_f85bea6784" }
Troubleshooting with logging mode
Let's look at an example that shows how powerful logging mode is. Intentionally send a basic prompt with an incorrect configuration:
::Change the credentials to your HTTP user's username and password
aiprompt.exe "Write few sentences about fishing" -u user -p pass -h http://localhost:9509
The configuration's URL is changed to an incorrect one, so you will get an error when running AI Prompt.
Output:
2025-01-23 13:03:53 [ERROR]: Unexpected response from server:
2025-01-23 13:03:53 [ERROR]:
The AI Prompt printed an error on the terminal, stating Unexpected response from server: with an empty response. To check what might be the issue, turn on logging mode:
aiprompt.exe "Write few sentences about fishing" -u user -p password -h http://localhost:9509 -l
Running aiprompt.exe with logging mode enabled will show you why you got an empty response:
Output:
2025-01-23 13:05:00 [DEBUG] Parsing URL
2025-01-23 13:05:00 [DEBUG] Parsing URL done
2025-01-23 13:05:00 [DEBUG] Checking if URL fits HTTP or HTTPS scheme
2025-01-23 13:05:00 [DEBUG] Checking done
2025-01-23 13:05:00 [DEBUG] Creating Authorisation Header
2025-01-23 13:05:00 [DEBUG] Using Basic Authorisation Header
2025-01-23 13:05:00 [DEBUG] Creating Authorisation Header done!
2025-01-23 13:05:00 [DEBUG] Creating Request Body
2025-01-23 13:05:00 [DEBUG] Generating Request Body from prompt
2025-01-23 13:05:00 [DEBUG] Generating done
2025-01-23 13:05:00 [DEBUG] {"model":"GGUF_Model_1","messages":[{"role":"assistant","content":"You are an assistant."},{"role":"user","content":"Write few sentences about fishing"}]}
2025-01-23 13:05:00 [DEBUG] Request body done
2025-01-23 13:05:00 [DEBUG] Setting up request
2025-01-23 13:05:00 [DEBUG] Standard execution
2025-01-23 13:05:00 [DEBUG] HTTP Request: Method: POST, RequestUri: 'http://localhost:9509/', Version: 1.1, Content: System.Net.Http.StringContent, Headers: { Authorization: Basic dXNlcjpwYXNz Content-Type: application/json; charset=us-ascii }
2025-01-23 13:05:00 [DEBUG] Content: {"model":"GGUF_Model_1","messages":[{"role":"assistant","content":"You are an assistant."},{"role":"user","content":"Write few sentences about fishing"}]}
2025-01-23 13:05:00 [DEBUG] Sending HTTP Request...
2025-01-23 13:05:00 [DEBUG] HTTP Request sent: Method: POST, RequestUri: 'http://localhost:9509/', Version: 1.1, Content: System.Net.Http.StringContent, Headers: { Authorization: Basic dXNlcjpwYXNz Content-Type: application/json; charset=us-ascii Content-Length: 154 }
2025-01-23 13:05:00 [DEBUG] HTTP Response: 404 NotFound
2025-01-23 13:05:00 [ERROR]: Unexpected response from server:
2025-01-23 13:05:00 [ERROR]:
As shown in Figure 34, after the HTTP request was sent to the AI Server, it responded with 404 Not Found. This means there is no network endpoint at the URL where we sent the request. Fix the issue by providing the correct URL:
aiprompt.exe "Write few sentences about fishing" -u user -p password -h http://localhost:9509/api?command=chatgpt
Let's see if it works now:
Output:
Role: assistant Content: Fishing is a popular recreational activity that involves catching fish from various bodies of water such as rivers, lakes, and oceans. It can be done for sport, relaxation, or sustenance. Anglers use different techniques and equipment, including rods, reels, and various types of bait and lures, to catch a wide variety of fish species. Fishing also plays a crucial role in many cultures and economies, providing food and employment opportunities. Additionally, it can be a way to connect with nature and enjoy the outdoors.
The output confirms that you have fixed the issue. Troubleshooting is easy with logging mode, which makes AI Prompt a versatile tool that is easy to operate.
Why should I use Ozeki AI Prompt instead of Postman?
Ozeki AI Prompt is specifically designed for efficient communication with HTTP AI APIs, making it a specialized tool for AI workflows, unlike Postman, which is a general-purpose API testing tool. This makes AI Prompt ideal for automation and real-time interaction through its interactive mode, and it can easily be integrated into scripts or pipelines.
Can I use Ozeki AI Prompt on other operating systems?
Yes, AI Prompt supports Linux and MacOS as well. Check out these guides:
Can I use Ozeki AI Prompt with Online AI API providers such as OpenAI/ChatGPT or DeepSeek?
Yes, AI Prompt supports all OpenAI API compatible providers, including ChatGPT and DeepSeek. Check out these guides:
Conclusion
This guide has walked you through the essential steps for setting up and using AI Prompt with the Ozeki HTTP API. From downloading and configuring the tool to sending prompts, utilizing interactive mode, and customizing settings with command-line parameters and environment variables, you now have the knowledge to fully leverage AI Prompt’s capabilities.
Check out other Ozeki AI solutions as well: Docs
More information