Nextcloud seems to have a bad reputation around here regarding performance. It never really bothered me, but when a comment on a post here yesterday talked about huge speed gains to be had with Postgres, I got curious and spent a few hours researching and tweaking my setup.
I thought I’d write up what I learned and maybe others can jump in with their insights to make this a good general overview.
For context: my installation initially started out with this docker compose stack using the official nextcloud docker images (as opposed to the AIO image or a source installation). I run it behind an NGINX reverse proxy.
Sources of information
- Server tuning on Nextcloud Docs: most of these are very basic things that are already taken care of in the docker image or in the proxy companion image I'm using. The one thing I haven't tried, and that comes up in other places too, is using Imaginary for image preview generation (see the sketch after this list).
- How to migrate Nextcloud 17 Database Backend from MySQL to postgreSQL
- Eking out some Nextcloud Performance: mainly talks about using a socket connection for Redis, but also mentions logging to syslog (I have not found a good source of information for this), using Postgres, and using Imaginary for image previews.
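For reference (I haven't set this up myself), an Imaginary setup is usually just one extra container plus two config.php entries. A minimal sketch only; the image name, port, and flags are assumptions, so check the image's documentation:

```yaml
services:
  imaginary:
    image: h2non/imaginary:latest     # assumed image; Nextcloud AIO ships its own variant
    restart: always
    # -enable-url-source lets Nextcloud hand Imaginary URLs to fetch; the port is an assumption
    command: -p 9000 -enable-url-source
```

On the Nextcloud side, the relevant config.php keys are 'preview_imaginary_url' (pointing at something like http://imaginary:9000) and adding 'OC\Preview\Imaginary' to 'enabledPreviewProviders'.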
Improvements
Migrate DB to Postgres
What I did first was migrate from MariaDB to Postgres, roughly following the blog post linked above. I didn't do any benchmarking, but page loads felt a little faster after that (though a far cry from the "way way faster" claims I'd read).
Here's my process
- add a postgres container to the compose file (see the sketch after this list). I named mine "postgres", added a "postgres" volume, and added it to depends_on for the app and cron containers
- run the migration command from the nextcloud app container like any other occ command:
./occ db:convert-type --password $POSTGRES_PASSWORD --all-apps pgsql $POSTGRES_USER postgres $POSTGRES_DB
The migration process stopped with an error for a deactivated app, so I completely removed that app, dropped the postgres tables and started the migration again, and it went through. After migration, check admin settings/system to make sure Nextcloud is now using postgres
- remove the old "db" container and volume and all references to them from the compose file, then run
docker compose up -d --remove-orphans
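Here is roughly what the added service can look like. A minimal sketch only: the image tag and environment variable names are assumptions, so adapt them to your own compose file:

```yaml
services:
  postgres:
    image: postgres:16-alpine            # image tag is an assumption; pin whatever major version you prefer
    restart: always
    volumes:
      - postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${POSTGRES_DB}       # variable names are assumptions; reuse your own
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}

volumes:
  postgres:
```

The app and cron services then get postgres added to their depends_on lists, as described above.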
Redis over Sockets
I followed the guide above for connecting to Redis over a socket, with the details noted below. This improved performance quite significantly: very fast loads for files, calendar, etc. I haven't yet changed the Postgres connection over to sockets, since the article only spoke of minor improvements there, but I might try that next.
Hints
- the redis configuration (host, port, password, …) needs to be set in config/config.php as well as in config/redis.config.php
- the cron container needs to receive the same /etc/localtime and /etc/timezone volumes the app container did, as well as the volumes_from: tmp
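For orientation, the config.php side of the Redis socket setup ends up looking roughly like this. A sketch only: the socket path depends on where the shared tmp volume is mounted in your setup, and the memcache lines are the usual Nextcloud caching config rather than anything socket-specific:

```php
<?php
// excerpt from config/config.php — mirror the same host/port/password in config/redis.config.php
$CONFIG = array (
  'memcache.local' => '\OC\Memcache\APCu',
  'memcache.locking' => '\OC\Memcache\Redis',
  'memcache.distributed' => '\OC\Memcache\Redis',
  'redis' => array (
    'host' => '/tmp/docker/redis.sock',  // path to the UNIX socket (assumption: shared tmp volume)
    'port' => 0,                         // port 0 tells Nextcloud to use the socket
    'password' => 'your-redis-password', // only if requirepass is set for Redis
  ),
);
```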
EDIT Postgres over Sockets
I'm now connecting to Postgres over a socket as well, which gave another pretty significant speed bump. Looking at the developer tools in Firefox, the dashboard now finishes loading in half the time it did before the change, at just over 6 seconds. I followed the same blog article I used for Redis.
Steps
- in the compose file, for the db container: add the /etc/localtime and /etc/timezone volumes; add user: "70:33"; add command: postgres -c unix_socket_directories='/var/run/postgresql/,/tmp/docker/'; add the tmp container to volumes_from and depends_on (see the sketch after this list)
- in Nextcloud's config.php, replace 'dbhost' => 'postgres', with 'dbhost' => '/tmp/docker/',
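Put together, the db service ends up looking something like this. Again just a sketch: the image tag and volume names are assumptions, and the tmp helper container comes from the linked blog post's setup:

```yaml
services:
  postgres:
    image: postgres:16-alpine            # tag is an assumption
    restart: always
    user: "70:33"                        # postgres UID in the alpine image, www-data GID
    command: postgres -c unix_socket_directories='/var/run/postgresql/,/tmp/docker/'
    volumes:
      - postgres:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    volumes_from:
      - tmp                              # shares /tmp/docker/ with the app container
    depends_on:
      - tmp
```

With 'dbhost' pointing at the directory (/tmp/docker/), the PHP Postgres driver picks up the socket file inside it instead of connecting over TCP.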
Outlook
What have you done to improve your instance’s performance? Do you know good articles to share? I’m happy to edit this post to include any insights and make this a good source of information regarding Nextcloud performance.
“Way faster” came from me :D
It is the complete package which makes it way faster…
Postgres, Redis, PHP OPcache, general PHP tweaks (php.ini, child processes etc.; use a calculator for the values), and HTTP/2 instead of 1.1.
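For the OPcache part, the commonly recommended php.ini values (based on the Nextcloud admin manual's suggestions, so double-check against the current docs for your version) look roughly like this:

```ini
; OPcache settings commonly recommended for Nextcloud — treat the numbers as starting points
opcache.enable=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1
```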
For HTTP/2, you can add this for Apache in your vhost:
Protocols h2 h2c http/1.1
For example:
ServerAdmin admin@server.com
DocumentRoot /var/www/html/nextcloud/
ServerName my.domain.com
Protocols h2 h2c http/1.1
....
In NGINX, add this on a new line:
http2 on;
For example:
server {
    listen 80;
    listen [::]:80;
    server_name my.domain.com;
    http2 on;
    ....
Using NGINX over Apache did nothing for me, so I use Apache with PHP-FPM 8.3, because I am using it for WordPress too; same goes for Redis sockets. I would recommend not using a docker container for Nextcloud. I don't like it for Nextcloud and I don't use docker for WordPress either. Docker has other use cases in my setup, but not those two.
This is just my personal setup.
Maybe it can help someone :)
That makes sense. If you start out without any of those I’m sure it’s night and day.
Thanks for the additional input!
There are no slow Nextclouds, only wrongly configured ones ☝🏻😁
They will be delighted to hear it.
Try MySQL instead of MariaDB. MySQL 8 has some performance tweaks that aren't present in MariaDB 10.
Also, tune your MySQL (or MariaDB) server. Make sure all tables use InnoDB. Enable the slow query log and analyze slow queries (there may be missing indices). If there’s a lot of unique queries, increase the query cache size.
The easy approach is to run MySQLTuner after the MySQL or MariaDB server has been up for at least a week, and go through its suggestions (see the snippet below).
There shouldn’t be a significant difference in performance between PostgreSQL and MySQL/MariaDB if both have been optimized. Out-of-the-box config isn’t ideal for a production system.
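The quick version of that, for anyone who wants it: MySQLTuner is a single Perl script, so something along these lines is usually enough (the download URL is the project's short link, from memory, so verify it before running anything):

```sh
# Fetch and run MySQLTuner against a local MySQL/MariaDB server
wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl --user root --pass 'your-root-password'
```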
Depends on how you're using it. You can wring an absolutely insane amount of performance out of Postgres that you cannot with MySQL.
I wonder how much Nextcloud leaves on the table?
Heads up: you can also get postgres to use a socket and mount that through for another speedup, if you haven't already.
Yeah, I saw that but wanted to take it step by step as not to break everything all at once. 😉
You can use UNIX sockets with MySQL or MariaDB too.
I've been a proponent here for a few months of using postgres/redis every time someone shits on NC for performance. While I agree the database change itself isn't a huge improvement, it pays for itself long term in larger-volume installs once you and your organization/group start using it heavily. Redis connected over a socket, like the AIO mastercontainer sets up, is where the real juice comes from, but only on an install that gets used, so it caches properly. The first time you fire it up it's pretty slow, but as it gets used, things get much better.
I’m going to try this next week. My nextcloud instance is getting a bit sluggish lately.
Thanks for sharing it, really helpful post
Does Postgres really help that much? It runs fine for me with MariaDB
Very anecdotally, I saw a little speed improvement but not all that much. DB size increased a bit. I’ll be sticking with it for the time being because why not.
I wonder what the performance impact would be if you were to move pgsql onto bare metal with enough RAM dedicated to caching all of the DB data (think: i5 or i7 NUC). That's going to be my next step with my homelab; I want to migrate everything to a single DB host with a lot of RAM and M.2 storage and avoid the DB process replication I have going on. I have no performance complaints with NC currently; I'm running PHP caching and Redis as well as image previews and Imaginary.
I had been running Nextcloud on an old laptop using Ubuntu, but that machine died. I have a Windows PC originally built for gaming that I am considering using for Nextcloud. Anyone have any experience with NC and Windows? Thought on the DB switch on Windows?
I don’t think you’ll do yourself any favours setting it up on Windows directly. How about docker+wsl2?
I have docker on the machine now and thought I’d try that type of install first. Sorry, I’m not familiar with the abbreviation “wsl2”
it stands for Windows Subsystem for Linux. Here is a link on how to install it.
100% agree with tofubi, Docker on Windows is a form of self-abuse, like cutting yourself. It's a train wreck for anything other than a little bit of testing for development work. You will come away with a bad taste in your mouth about Docker; I avoided containers for years because I started with them on Docker for Windows.
I’ve run a lot of different scenarios with docker, what I’ve come down to as the cleanest and easiest to maintain is Debian 12 with the Docker convenience script. It’s fast, hassle free, and doesn’t have a bunch of layers of weirdness like using Ubuntu Server with a docker snap that makes troubleshooting a nightmare.
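For reference, the convenience script route is just the following (straight from Docker's install docs; review the script before running it as root):

```sh
# Install Docker Engine on Debian via the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```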
for anything other than a little bit of testing for development work.
It’s really awesome for development work, though. Visual Studio has built-in Docker support, so I can run my app and its unit tests on both Windows and Linux (via Docker) at the same time on the same system during development.
This sounds interesting.
I use docker in vscode for latex. It saves me the trouble of having to install texlive on my system. I have a task defined that mounts my sources in and runs the compilation in the container.
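Not their exact workflow, but for illustration, such a task can boil down to a single docker run. A rough sketch, assuming the texlive/texlive image and latexmk; the image name, tag, and file names are assumptions:

```sh
# Compile main.tex inside a TeX Live container instead of installing texlive locally
docker run --rm -v "$PWD":/workdir -w /workdir texlive/texlive:latest latexmk -pdf main.tex
```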
Would love to hear about your work flow.
I've tried to get Nextcloud working several times and it just never seems to work for some reason… maybe I should set it up on a Pi I've got laying around instead of my main server lol
Have you tried the AIO method that’s now the primary supported docker install?
It’s really good, and I’ve set up and used NC in a variety of ways since about version 7.
I'm not sure / cannot recall. It's been a few months since I last tried to install it and it kept erroring out. I'm definitely strongly considering looking back into it though, it's just that reverse proxying to the container was a nightmare… it still haunts my config, lol
I use NPM and all I think I had to add to it was
client_body_buffer_size 512k;
proxy_read_timeout 86400s;
client_max_body_size 0;
in the Advanced config. I’d love to move to Traefik but I could not figure out how to make that work.
There were some other gotchas. If you run into something, ping me, I might remember if I encountered it and what I did.
My advice: use the nextcloud snap package. It’s seamless.
It is, in fact, the only Snap I’ve ever used which worked without issues
That being said, it’s kinda slow in some cases, but perfectly useable nonetheless
I know snap isn’t popular among Linux nerds, but I was really having issues with the AIO docker setup and at the time I didn’t have the time to troubleshoot/fight it. I needed to give my family a file drop link to share photos for a memorial service.
I figured, the snap package was recommended on their site, maybe it won’t be horrible. To my surprise it was incredibly easy, has been rock solid, never had performance issues, and it’s always up-to-date.
Snap may suck for some use cases, but this one seems to be right in its wheelhouse.
It also has an export/backup capability built in.
That is… surprising. Not that I don't believe you, snap just doesn't have a good track record, lol. I'll have to research whether it's feasible to run a snap package on a Debian server, though.
Thanks. I didn’t realize syslog would help. Just configured it to send to my grafana/loki server. Not sure if it’s really helping, but seems like maybe it’s a bit faster. I’ve long since done everything listed here and more, but in the last couple months my nextcloud has seemed a bit sluggish for some reason.
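For anyone else looking into the syslog route, the Nextcloud side is just a couple of config.php keys; a sketch only (shipping the logs on to Grafana/Loki then happens via your syslog daemon or an agent like promtail, which isn't shown here):

```php
<?php
// excerpt from config/config.php: log to syslog instead of nextcloud.log
$CONFIG = array (
  'log_type' => 'syslog',
  'syslog_tag' => 'nextcloud',   // tag to filter on in your syslog/Loki pipeline
  'loglevel' => 2,               // 0 = debug … 4 = fatal; 2 (warning) is the default
);
```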
Since I started using NixOS I don't have any problems with Nextcloud 🙃
I only recently started using nix and NixOS. How’s the update process for nextcloud? Can you use the self updater?
In NixOS you almost never use any "self" updater.
You update everything with your whole system at once.
Even for installed apps, the true NixOS way to install them is through the configuration file.
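For anyone curious what that looks like, the NixOS module boils down to a few lines in configuration.nix. A minimal sketch using the services.nextcloud module; the hostname, package attribute, and database backend here are assumptions for the example:

```nix
{ pkgs, ... }: {
  services.nextcloud = {
    enable = true;
    hostName = "cloud.example.com";            # example hostname
    package = pkgs.nextcloud30;                # assumption: pin whatever major version nixpkgs ships
    database.createLocally = true;             # let the module provision the database
    config = {
      dbtype = "pgsql";                        # assumption: postgres backend
      adminpassFile = "/run/secrets/nextcloud-admin";
    };
  };
}
```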
That makes sense, it does sound better to keep it within nixos! I’ve mostly been using nixos to bootstrap servers that run nomad+docker, so beyond the system-level config, I haven’t done a lot with additional software yet.