Docker

Docker is an Open Source container virtualization tool. It is ideal for running applications on any computer without extensive installation, configuration, or performance overhead.

We are aware Docker is not widely used by end users despite its many advantages. For this reason, we aim to provide native binaries for common operating systems at a later time.

Why are we using Docker?

Containers are nothing new; Solaris Zones have been around for about 15 years, first released publicly in 2004. The chroot system call was introduced during development of Version 7 Unix in 1979. It has been used ever since for hosting applications exposed to the public Internet.

Modern Linux containers are an incremental enhancement. A main advantage of Docker is that application images can easily be made available to users via the Internet. It provides a common standard across most operating systems and devices, which saves our team a lot of time that we can then spend more effectively, for example, on providing support and developing one of the many features that users are waiting for.

Human-readable and versioned Dockerfiles as part of our public source code also help avoid "works for me" moments and other unwelcome surprises by enabling teams to have the exact same environment everywhere in development, staging, and production.

Last but not least, virtually all file format parsers have vulnerabilities that just haven't been discovered yet. This is a known risk that can affect you even if your computer is not directly connected to the Internet. Running apps in a container with limited host access is an easy way to improve security without compromising performance and usability.
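
As a sketch of what limited host access can look like, the standard Docker flags below disable networking, drop all capabilities, and mount the working directory read-only; the exact combination depends on the application and is only an example:

docker run --rm \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --network none \
  -v ${PWD}:/test:ro \
  -ti debian:bookworm bash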

Running Docker Images

Assuming you have Docker installed and want to test Debian 12 "Bookworm", you can simply run this command to open a terminal:

docker run --rm -v ${PWD}:/test -w /test -ti debian:bookworm bash

This will mount the current working directory as /test. Of course, you can also specify a full path instead of ${PWD}.
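
For example, with a hypothetical photo directory (the path below is just a placeholder):

docker run --rm -v /home/user/Pictures:/test -w /test -ti debian:bookworm bash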

The available Ubuntu, Debian, and PhotoPrism images can be found on Docker Hub.
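
For example, images can be pulled directly by name; the tags shown here are illustrative and may differ from the ones currently published:

docker pull debian:bookworm
docker pull ubuntu:latest
docker pull photoprism/photoprism:latest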

Additional packages can be installed inside the container via apt:

apt update
apt install -y exiftool libheif-examples
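
To confirm that the tools are available afterwards, a quick check might look like this (exiftool -ver prints the installed version, and dpkg -s shows the package status):

exiftool -ver
dpkg -s libheif-examples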

Continuous Integration / Deployment

Travis CI automatically builds and pushes an updated container image to Docker Hub whenever develop is merged into master and all tests are green. For that reason, we don't use semantic versioning for our binaries and container images. A version string might look like 181112-edc7c2f-Darwin-i386-DEBUG instead. Travis CI uses the photoprism/development image for running unit and integration tests on all branches and for pull requests (see Dockerfile).
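
As an illustration of how such a version string could be assembled (a sketch only, not the exact CI script), it combines a build date, the short Git commit hash, and platform information:

BUILD_DATE=$(date -u +%y%m%d)
GIT_COMMIT=$(git rev-parse --short HEAD)
echo "${BUILD_DATE}-${GIT_COMMIT}-$(uname -s)-$(uname -m)-DEBUG"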

Multi-Stage Build

When creating new images, Docker supports so-called multi-stage builds: you can compile an application like PhotoPrism in a container that contains all development dependencies (source code, debugger, compiler, ...) and later copy the binary to a fresh container. This way, we could reduce the compressed container size from ~1 GB to less than 200 MB. Most of that is used by Darktable, TensorFlow, and Ubuntu 18.04. Our photoprism binary is smaller than 20 MB.

Example:

FROM photoprism/development:20181112 as build

# Build PhotoPrism
WORKDIR "/go/src/github.com/photoprism/photoprism"
COPY . .
RUN make all install DESTDIR=/opt/photoprism

# Same base image as photoprism/development
FROM ubuntu:18.04

WORKDIR /opt/photoprism

# Copy built binaries and assets to this image
COPY --from=build /usr/local/bin/photoprism /usr/local/bin/photoprism
COPY --from=build /opt/photoprism /opt/photoprism

# Expose HTTP port
EXPOSE 80

# Start PhotoPrism server
CMD photoprism start
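
A multi-stage image like this is built and inspected the same way as any other; the tag used here is only an example:

docker build -t photoprism/photoprism:preview .
docker images photoprism/photoprism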

Kubernetes

External Resources