Ollama Cloud Setup

Learn how to use PhotoPrism with Ollama Cloud to generate detailed captions and accurate labels for your pictures without running a local Ollama instance.

Step 1: Get an API Key

To use Ollama Cloud, you need an account at ollama.com and a valid API key, which you can generate at https://ollama.com/settings/keys.

Step 2: Configure Environment

Add the OLLAMA_BASE_URL and OLLAMA_API_KEY environment variables to the photoprism service in your compose.yaml (or docker-compose.yml) file, as shown in the example below. [1]

compose.yaml

services:
  photoprism:
    image: photoprism/photoprism:latest
    environment:
      OLLAMA_BASE_URL: "https://ollama.com"
      OLLAMA_API_KEY: "your-api-key"
      ...

With these variables set, PhotoPrism automatically uses the Ollama Cloud endpoint for all Ollama-based models configured in your vision.yml. You do not need to specify a Service.Uri or Service.Key in the model configuration, as they are resolved from the environment.
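If you prefer, you can also make the endpoint explicit per model instead of relying on the environment. The following fragment is equivalent to the environment-based setup above; the key value is a placeholder that you would replace with your own:

```yaml
# vision.yml: explicit per-model service configuration,
# equivalent to setting OLLAMA_BASE_URL and OLLAMA_API_KEY
Models:
- Type: caption
  Model: qwen3-vl:235b-instruct-cloud
  Engine: ollama
  Run: auto
  Service:
    Uri: "https://ollama.com"
    Key: "your-api-key"
```

Explicit values in vision.yml take precedence over the environment, which can be useful if you want to mix cloud and local Ollama models in the same configuration.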

When OLLAMA_BASE_URL is set to https://ollama.com, PhotoPrism switches to cloud defaults automatically. An API key alone does not force cloud usage. Since the currently published Ollama Cloud model name is qwen3-vl:235b-instruct-cloud, we recommend setting it explicitly in vision.yml.

Step 3: Configure Models

Create a new vision.yml file in your config path (default: storage/config), or edit the existing file in that folder, following the example below.

Since the service URI and key are resolved from the environment, you can omit Service.Uri and Service.Key:

vision.yml

Models:
- Type: labels
  Model: qwen3-vl:235b-instruct-cloud
  Engine: ollama
  Run: auto
  Service:
    Think: "false"
- Type: caption
  Model: qwen3-vl:235b-instruct-cloud
  Engine: ollama
  Run: auto
  Service:
    Think: "false"

Make sure the models you configure are available on Ollama Cloud. You can browse the list of supported cloud models to see which ones can be used; you do not need to pull them manually, as cloud models are served remotely.

The optional Service.Think: "false" setting disables reasoning output for models that support it. This is often useful for captions and labels because it reduces latency and avoids spending output tokens on internal reasoning instead of the final result.

At the time of writing, PhotoPrism's internal Ollama cloud default in the application code is still qwen3-vl:235b-instruct, so explicitly setting Model: qwen3-vl:235b-instruct-cloud avoids ambiguity until that default is updated upstream.

Scheduling Options

  • Run: auto (recommended) automatically runs the model after indexing is complete to prevent slowdowns during indexing or importing. It also allows manual and scheduled invocations.
  • Run: manual disables automatic execution, allowing you to run the model manually via photoprism vision run -m caption or photoprism vision run -m labels.

Configuration Tips

PhotoPrism evaluates models from the bottom of the list up, so placing the Ollama entries after the others ensures Ollama is chosen first while the others remain available as fallback options.
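For example, with two caption entries, the bottom-up evaluation order means the last entry is preferred. In this sketch, the model and engine names in the first entry are purely illustrative placeholders for whatever other entry you have configured:

```yaml
# vision.yml: entries are evaluated from the bottom up,
# so the Ollama entry listed last is chosen first and the
# entry above it serves as a fallback
Models:
- Type: caption
  Model: fallback-model        # hypothetical fallback entry
  Engine: other-engine         # placeholder engine name
  Run: auto
- Type: caption
  Model: qwen3-vl:235b-instruct-cloud
  Engine: ollama
  Run: auto
```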

Ollama-generated captions and labels are stored with the ollama metadata source automatically, so you do not need to request a specific source field in the schema or pass --source to the CLI unless you want to override the default.

Prompt Localization

To generate output in other languages, keep the base instructions in English and add the desired language (e.g., "Respond in German"). This method works for both caption and label prompts.
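As an illustration, assuming your PhotoPrism version supports a custom Prompt field on the model entry (verify the exact field name against the Vision configuration reference), a localized caption prompt could look like this:

```yaml
# vision.yml: caption model with a localized prompt
# (the Prompt field shown here is an assumption; check your
# version's documentation for the exact schema)
- Type: caption
  Model: qwen3-vl:235b-instruct-cloud
  Engine: ollama
  Run: auto
  Prompt: "Describe this image in one concise sentence. Respond in German."
```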

Step 4: Restart PhotoPrism

Run the following commands to restart the photoprism service and apply the new settings:

docker compose stop photoprism
docker compose up -d

You should now be able to use the photoprism vision CLI commands from a terminal, e.g. photoprism vision run -m caption to generate captions, or photoprism vision run -m labels to generate labels.

Troubleshooting

Verifying Your Configuration

If you encounter issues, a good first step is to verify how PhotoPrism has loaded your vision.yml configuration. You can do this by running:

docker compose exec photoprism photoprism vision ls

This command outputs the settings for all supported and configured model types. Compare the results with your vision.yml file to confirm that your configuration has been loaded correctly and to identify any parsing errors or misconfigurations.

Performing Test Runs

The following terminal commands will perform a single run for the specified model type:

photoprism vision run -m labels --count 1 --force
photoprism vision run -m caption --count 1 --force

If you don't get the expected results or notice any errors, you can re-run the commands with trace log mode enabled to inspect the request and response:

photoprism --log-level=trace vision run -m labels --count 1 --force
photoprism --log-level=trace vision run -m caption --count 1 --force

  [1] Unrelated configuration details have been omitted for brevity.