Performance Tips

MariaDB

The InnoDB buffer pool serves as a cache for data and indexes. It is a key component for optimizing MariaDB performance. Its size should be as large as possible to keep frequently used data in memory and reduce disk I/O - typically the biggest bottleneck.

By default, the buffer pool size is between 128 MB and 512 MB, depending on which configuration example you use. You can change it with the --innodb-buffer-pool-size parameter in the command of the mariadb: service in your docker-compose.yml. Use M for Megabytes and G for Gigabytes, and do not put a space between the number and the unit.
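To illustrate the expected value format, here is a small shell sketch; the check helper is hypothetical and only mirrors the "digits plus an M or G suffix, no spaces" convention, it is not part of MariaDB:

```shell
# Hypothetical validator for --innodb-buffer-pool-size values:
# digits followed by an optional M or G suffix, with no spaces.
check() {
  case "$1" in
    *[!0-9MG]*|"") echo "$1: invalid" ;;
    *) echo "$1: ok" ;;
  esac
}

check "1G"     # 1G: ok
check "512M"   # 512M: ok
check "1 G"    # 1 G: invalid (contains a space)
```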

If your server has plenty of physical memory, we recommend increasing the size to 1 or 2 GB:

services:
  mariadb:
    command: mysqld --innodb-buffer-pool-size=1G ...

As a rule of thumb, Innodb_buffer_pool_pages_free should never drop below 5% of the total pages. You can run the following SQL statement, for example using the mariadb command in a terminal, to display the number of free pages and other InnoDB-related status information:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer%';
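The 5% rule of thumb can be checked with simple arithmetic on the two page counters from the output above; the values below are hypothetical examples, not real measurements:

```shell
# Hypothetical counters as reported by SHOW GLOBAL STATUS LIKE 'Innodb_buffer%':
pages_free=4096
pages_total=65536

# Free pages as a percentage of the total (integer arithmetic):
pct=$(( pages_free * 100 / pages_total ))
echo "free pages: ${pct}%"

if [ "$pct" -lt 5 ]; then
  echo "consider increasing innodb_buffer_pool_size"
fi
```

With these example values, about 6% of the pages are free, so the buffer pool would be adequately sized.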

Advanced users may adjust additional parameters to further improve performance. Tools such as the mysqltuner.pl script can provide helpful recommendations for this.

Windows and macOS

If you are using Docker Desktop on Windows or macOS, remember to increase the total memory available to Docker services. Otherwise, they may run out of resources and will not be able to benefit from a larger cache size. If PhotoPrism and MariaDB are running in a virtual machine, increase its memory allocation as well. Restart the services for the changes to take effect.

Migration from SQLite

After migrating from SQLite, columns may not have exactly the data types they should, or indexes may be missing. This can lead to poor performance. For example, MariaDB cannot process rows with TEXT columns in memory and always creates temporary tables on disk when a query involves them.
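One way to spot this problem is to compare MariaDB's Created_tmp_disk_tables and Created_tmp_tables status counters: a high on-disk share suggests that queries cannot use in-memory temporary tables. A sketch with hypothetical counter values:

```shell
# Hypothetical counters from SHOW GLOBAL STATUS LIKE 'Created_tmp%':
created_tmp_tables=1000
created_tmp_disk_tables=800

# Share of temporary tables that spilled to disk:
disk_pct=$(( created_tmp_disk_tables * 100 / created_tmp_tables ))
echo "on-disk temporary tables: ${disk_pct}%"
```

A consistently high percentage after a migration is a hint to verify the schema as described below.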

If this is the case, please make sure that your migrated database schema matches that of a fresh, non-migrated installation, e.g. by re-running the migrations manually in a terminal with the photoprism migrations ls and photoprism migrations run [id] subcommands.

Windows

Solving Windows-Specific Issues

Storage

Local Solid-State Drives (SSDs) are best for databases of any kind:

  • database performance benefits greatly from high throughput, which HDDs cannot provide
  • SSDs offer more predictable performance and can handle more concurrent requests
  • because of their seek time, HDDs support only about 5% of the reads per second of SSDs
  • the cost savings from using slow hard disks are minimal

Switching to SSDs makes a big difference, especially for write operations and when the read cache is not big enough or can't be used.

Never store database files on an unreliable device such as a USB flash drive, SD card, or shared network folder. These may also have unexpected file size limitations, which is especially problematic for databases that do not split data into smaller files.

Memory

Indexing large photo and video collections benefits from plenty of memory for caching and processing large media files. Ideally, the amount of RAM in GB should at least match the number of physical CPU cores. If it does not, reduce the number of workers as explained below.
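This rule of thumb can be sketched as a small calculation; the example numbers and the one-worker-per-GB heuristic are illustrative assumptions, and PHOTOPRISM_WORKERS is the environment variable that limits the number of indexing workers:

```shell
# Example values: 4 physical CPU cores, but only 2 GB of RAM.
cores=4
ram_gb=2

# Heuristic sketch: one worker per core, but no more than one per GB of RAM.
workers=$(( ram_gb < cores ? ram_gb : cores ))
echo "PHOTOPRISM_WORKERS=${workers}"
```

Here the limited RAM, not the core count, determines the suggested worker limit.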

The conversion of RAW images and the transcoding of videos are especially demanding. High-resolution panoramic images may require additional swap space and/or physical memory above the recommended minimum.

RAW image conversion and TensorFlow are disabled on systems with 1 GB or less memory. We take no responsibility for instability or performance problems if your device does not meet the requirements.

Server CPU

Last but not least, performance can be limited by your server CPU. If you have tried everything else, your only remaining option may be to move your instance to a more powerful device or cloud server.

Be aware that most NAS devices are optimized for minimal power consumption and low production costs. Although their hardware gets faster with each generation, benchmarks show that even 8-year-old standard desktop CPUs like the Intel Core i3-4130 are often many times faster:

[CPU benchmark chart]

Legacy Hardware

It is a known issue that the user interface and backend operations, especially face recognition, can be slow or even crash on older hardware due to a lack of resources. Like most applications, PhotoPrism has certain requirements and our development process does not include testing on unsupported or unusual hardware.

In many cases, performance can be improved through optimizations. Since these can prove to be very time-consuming and cost-intensive in practice, users and developers must decide on a case-by-case basis whether this provides sufficient benefit in relation to the costs or whether the use of more powerful hardware is faster and cheaper overall.

We kindly ask you not to open a problem report on GitHub Issues for poor performance on older hardware until a full cause and feasibility analysis has been performed. GitHub Discussions or any of our other public forums and communities are great places to start a discussion.

That being said, one of the advantages of open-source software is that users can submit pull requests with performance and other improvements they would like to see implemented. This will result in a much faster solution than waiting for a core team member to remotely analyze your problem and then provide a fix.

Troubleshooting

If your server runs out of memory, the index is frequently locked, or other system resources are running low, try reducing the number of indexing workers, for example by setting PHOTOPRISM_WORKERS to a value below the number of logical CPU cores, and make sure sufficient swap space is available.

Other issues? Our troubleshooting checklists help you quickly diagnose and solve them.

You are welcome to ask for help in our community chat. Sponsors receive direct technical support via email. Before submitting a support request, try to determine the cause of your problem.