Setup SeaFile with OnlyOffice on one Docker host

A self-hosted cloud is something a lot of people are interested in nowadays, whether because they don’t want to share their data with a big company or simply for the learning experience, to name just a few reasons.
In this article we take a look at SeaFile, an open source cloud software solution. To get a more modern touch, we will also integrate OnlyOffice into SeaFile. This allows us to do collaborative editing of office documents, including docx, pptx and xlsx files (the Microsoft Office formats). Don’t be afraid of the length of this article. If you’re already familiar with technologies like SeaFile, Docker and OnlyOffice, you can easily skip to the chapters where it starts to get interesting for you.

Why SeaFile and not NextCloud?

I decided to go with SeaFile instead of NextCloud about a year ago. The simple reason was speed! SeaFile is amazingly fast at synchronization. Even if you have thousands of little files, they will all be synced quickly.
Another big reason to go with SeaFile is delta synchronization. Delta synchronization basically means that SeaFile is able to determine which part of a file has been changed and then uploads only that changed part. This mechanism goes hand in hand with the overall synchronization speed, too. SeaFile also offers end-to-end encryption. This means you can encrypt your files locally and upload them to the server, which makes the files unreadable for third parties.
If you want to extend your cloud with things like contact and calendar synchronization, then NextCloud might be the better choice for you, as there is no option to integrate such things into SeaFile. SeaFile is a pure cloud solution for files, but it does this one key feature almost perfectly.

Requirements

First of all, I assume that you already have Docker installed on your server system. If not, check out the official Docker installation instructions. Your (server) system should have at least 4 GB of RAM and 5 GB of free hard disk space in order to install the needed container files. In addition to these 5 GB you need the desired amount of free hard disk space for the files you want to store.
If you don’t have a server or a system at home which you can use, you can try a hosted server at one of the many providers out there. I can recommend Netcup as a provider. You can find their offers here: netcup VPS. You can also use one of the following codes at the checkout. These codes are 5€ vouchers for new customers at netcup:

  • 36nc15574364944
  • 36nc15574364943
  • 36nc15574364942
  • 36nc15574364941
  • 36nc15574364940

Hint: These vouchers are part of an affiliate system, which means I get a small reward when you use them.

Create a Docker network

The first step is to create a Docker network interface. We need an extra interface in order to assign static private IP addresses to the containers. Why do we need static IP addresses at all? This will be explained later on in this how-to.
The following command creates a new Docker network interface named „cloud“ with a maximum of 253 usable IP addresses:

user@server:~$ sudo docker network create --subnet=192.168.20.0/24 cloud

The IP addresses within this network go from 192.168.20.1 to 192.168.20.254. The first address is used by the Docker host itself (in this example 192.168.20.1). We could cut down the usable addresses even more, but for simplicity, let’s stick with this solution for now.
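If you want to double check the result, Docker can show you the details of the new network, including the subnet and, later on, all containers attached to it:

user@server:~$ sudo docker network inspect cloud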

Setup the SeaFile Docker container

Now that we have created a dedicated Docker network, we can start to create the SeaFile Docker container. In order to store the database as well as the client files on the Docker host, we have to create a folder which we then mount into the container. In this example I’ve decided to store all the SeaFile related files under /srv/seafile on the host system. So we have to create this directory first:

user@server:~$ sudo mkdir /srv/seafile

Next, we can finally create the Docker container. The following command creates a SeaFile Docker container with the latest available version, detached (-d switch). The rest of the parameters are explained below:

user@server:~$ sudo docker run -d --name seafile --net cloud --ip 192.168.20.2 -e SEAFILE_SERVER_LETSENCRYPT=true -e SEAFILE_SERVER_HOSTNAME=cloud.myserver.com -e SEAFILE_ADMIN_EMAIL=me@mymail.com -e SEAFILE_ADMIN_PASSWORD='MyPW' -v /srv/seafile:/shared -p 80:80 -p 443:443 seafileltd/seafile:latest
The parameters in detail
  • --name: The name of the container. In this example the container will be called seafile.
  • --net: The network the container will be attached to. We use the previously created cloud network here.
  • --ip: The fixed IP address within the cloud network.
  • -e SEAFILE_SERVER_LETSENCRYPT: This parameter is passed to the container’s setup script (which runs the first time you create a new container). It tells the container that SSL encryption (HTTPS) shall be enabled and that we want a free certificate from Let’s Encrypt. Keep in mind that the DNS resolution must work in order to get a valid Let’s Encrypt certificate! This certificate will be valid for 90 days. To extend it, simply recreate the container (look at the section Extending Let’s Encrypt certificates below to find out more).
  • -e SEAFILE_SERVER_HOSTNAME: Also a parameter which is passed to the setup script. Sets the desired hostname within the SeaFile Docker container.
  • -e SEAFILE_ADMIN_EMAIL: Another setup parameter. Sets the mail address of the SeaFile administrator. This will also be your login once the container is up and running.
  • -e SEAFILE_ADMIN_PASSWORD: The password of the SeaFile administrator.
  • -v /srv/seafile:/shared: We mount the previously created directory /srv/seafile of the host system to /shared within the Docker container. If we didn’t mount /shared, all files would be gone after restarting or recreating the SeaFile container.
  • -p 80:80 and -p 443:443: We make the SeaFile container accessible from the world wide web over ports 80 and 443, the standard ports for HTTP and HTTPS. All unencrypted HTTP traffic will be automatically redirected to the encrypted way over port 443 (HTTPS).
  • seafileltd/seafile:latest: This downloads the latest version of the SeaFile server from the official Docker Hub. Docker Hub is basically the package manager of Docker.

Don’t forget to adjust the parameters according to your needs. After a few minutes, your SeaFile installation will be ready to use. You can access it with your public domain or IP address, e.g. https://cloud.myserver.com.
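If the page doesn’t come up right away, you can watch the setup progress of the container. The log output varies, but you will see when the setup script is done:

user@server:~$ sudo docker logs -f seafile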

Setup the OnlyOffice Docker container

Now that SeaFile is running and accessible, we can make OnlyOffice ready for usage, too. As already stated, OnlyOffice is an editor for most of the Microsoft Office document formats including docx, pptx and xlsx. It’s open source and free to use. Before we start, we will create a directory where OnlyOffice can store its cache and other database related files. We will later mount that directory into the OnlyOffice Docker container, just like we did with SeaFile:

user@server:~$ sudo mkdir /srv/onlyoffice
Install certbot and obtain a Let’s Encrypt certificate

We’ve already secured SeaFile with the help of a Let’s Encrypt certificate. Thus it’s only consistent to secure OnlyOffice with Let’s Encrypt as well. To do so we need a little tool called certbot. It does all the work for us. If you’re running an Ubuntu server version 18.04 or higher, you can easily install it like this:

user@server:~$ sudo apt install certbot

If you’re a Debian user, you have to use backports in Debian 9 in order to install certbot. To enable backports in Debian 9, create a new file in the directory /etc/apt/sources.list.d/ named backports.list with the following content:

deb http://deb.debian.org/debian stretch-backports main contrib non-free

Save and close the file. Now we can install certbot as well:

user@server:~$ sudo apt update && sudo apt install -t stretch-backports certbot

Now that we have certbot installed, we can get another certificate. Warning: Don’t use the same domain name here which you already used for SeaFile! Otherwise your SeaFile certificate will be invalidated. The easiest way is to use another subdomain like office.myserver.com. Also note that certbot in standalone mode needs port 80, which is currently occupied by the SeaFile container, so stop it briefly (sudo docker stop seafile) before running certbot and start it again afterwards:

user@server:~$ sudo certbot certonly --standalone -d office.myserver.com

Just like with SeaFile, ensure that the subdomain you’re using points to the public IP address of your server. Otherwise you won’t get a certificate and certbot will return an error. If the process was successful, the certificate will be valid for 90 days and you can find it under /etc/letsencrypt/live/office.myserver.com/cert.pem.
These files have to be copied so that the OnlyOffice container is later able to read them. We also have to extend the certificate with the Let’s Encrypt chain. Otherwise SeaFile will not be able to validate the certificate later on, which results in errors when opening documents with OnlyOffice:

user@server:~$ sudo mkdir -p /srv/onlyoffice/data/certs
user@server:~$ sudo cp /etc/letsencrypt/live/office.myserver.com/cert.pem /srv/onlyoffice/data/certs/onlyoffice.crt
user@server:~$ sudo cp /etc/letsencrypt/live/office.myserver.com/privkey.pem /srv/onlyoffice/data/certs/onlyoffice.key
user@server:~$ wget https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt
user@server:~$ sudo sh -c "cat lets-encrypt-x3-cross-signed.pem.txt >> /srv/onlyoffice/data/certs/onlyoffice.crt"
user@server:~$ rm lets-encrypt-x3-cross-signed.pem.txt
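If you want to make sure that the chain was appended correctly, two quick sanity checks help. The bundle should now contain two certificates (your own plus the chain certificate), and openssl can show you the subject and expiry date of your certificate:

user@server:~$ sudo grep -c 'BEGIN CERTIFICATE' /srv/onlyoffice/data/certs/onlyoffice.crt
user@server:~$ sudo openssl x509 -in /srv/onlyoffice/data/certs/onlyoffice.crt -noout -subject -enddate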
Setup OnlyOffice

We’re now finally able to start OnlyOffice. For this use the following command (the parameters are explained below again):

user@server:~$ sudo docker run -i -t -d -p 8443:443 --name onlyoffice --net cloud --ip 192.168.20.3 --add-host cloud.myserver.com:192.168.20.2 -v /srv/onlyoffice/logs:/var/log/onlyoffice -v /srv/onlyoffice/data:/var/www/onlyoffice/Data onlyoffice/documentserver:latest
The parameters in detail
  • -p 8443:443: We make OnlyOffice accessible over port 8443 for connections from the world wide web. You don’t have to use port 8443, but a higher, more uncommon port is recommended. It’s something like an additional „security factor“: higher, more unusual port numbers are less likely to be scanned by bots.
  • --net and --ip: Again we use the dedicated cloud Docker network and set a fixed IP address.
  • --add-host: This is a key parameter here. We are running OnlyOffice and SeaFile on a single host system, which means that OnlyOffice has to be able to contact SeaFile over port 9000. By default, that’s not possible. As a workaround (which is also more secure than allowing SeaFile to communicate with the world wide web over port 9000) we use this parameter to add an entry to the /etc/hosts file of the OnlyOffice container. This entry points cloud.myserver.com to the internal Docker IP address.
  • -v: Same as with the creation of the SeaFile container. Files like logs will be stored there. This also includes the certificates we’ve created in the step before.
  • onlyoffice/documentserver:latest: Like with the SeaFile container creation, we use the latest OnlyOffice documentserver version available on the Docker Hub.

After a few minutes, the OnlyOffice instance should be ready to use. You can check this by visiting the IP address or the domain name of your instance / server, e.g. https://office.myserver.com:8443. If you see Document Server is running, you’re good to go.
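You can also check this from the command line. Recent OnlyOffice Document Server versions expose a simple healthcheck endpoint which should return true (the exact endpoint may differ between versions, so treat this as a quick sanity check only):

user@server:~$ curl -k https://office.myserver.com:8443/healthcheck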

Get SeaFile and OnlyOffice to work together

Now that SeaFile and OnlyOffice are running, we have to link them in order to get them working together. To do so, we have to tell SeaFile how our OnlyOffice instance can be reached. So open the file /srv/seafile/seafile/conf/seahub_settings.py on your Docker host and add the following lines at the end of this file:

ENABLE_ONLYOFFICE = True
VERIFY_ONLYOFFICE_CERTIFICATE = False
ONLYOFFICE_APIJS_URL = 'https://office.myserver.com:8443/web-apps/apps/api/documents/api.js'
ONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods')
ONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx')

You have to change ONLYOFFICE_APIJS_URL to the actual URL of your OnlyOffice instance. Save and close the file. Even though we tell SeaFile not to verify the OnlyOffice certificate (parameter VERIFY_ONLYOFFICE_CERTIFICATE = False), it checks the certificate anyway. That’s why we added the complete Let’s Encrypt chain a few steps before.
That’s already it. Restart the two containers and you should be able to create / modify documents in SeaFile:

user@docker:~$ sudo docker stop onlyoffice
user@docker:~$ sudo docker stop seafile
user@docker:~$ sudo docker start onlyoffice
user@docker:~$ sudo docker start seafile
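By the way, Docker can handle several containers in one command, so the four commands above can be shortened to a single one:

user@docker:~$ sudo docker restart onlyoffice seafile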
Lack of security token support in SeaFile

Security is really important these days. While SeaFile only works if you are logged in, OnlyOffice has implemented another approach: the so-called „security token“. A security token is used by the cloud software solution (like NextCloud or SeaFile) to authenticate against the OnlyOffice instance.
While NextCloud does support this security token mechanism, SeaFile sadly doesn’t. In addition, you can’t simply use a firewall to block all traffic from outside to your OnlyOffice instance, because the clients also have to be able to communicate with it. Because of this I recommend (as a little bit of obscurity) using a more uncommon port. I know that this doesn’t really make anything „secure“, but it’s at least something for now. Implementing security token support is on the road map of the SeaFile developers.
For clarification, not using security tokens doesn’t mean everyone can access your files. It just means that everyone who is able to find out the actual address of your OnlyOffice instance can use your instance for editing documents as well.

Updating the containers

Updating the SeaFile as well as the OnlyOffice container is really simple. To do so, stop the container, delete it and recreate it. The already explained latest tag at the end of the image name makes sure that you always pull the latest stable version available. So, for example, to update OnlyOffice:

user@docker:~$ sudo docker stop onlyoffice
user@docker:~$ sudo docker rm onlyoffice
user@docker:~$ sudo docker run -i -t -d -p 8443:443 --name onlyoffice --net cloud --ip 192.168.20.3 --add-host cloud.myserver.com:192.168.20.2 -v /srv/onlyoffice/logs:/var/log/onlyoffice -v /srv/onlyoffice/data:/var/www/onlyoffice/Data onlyoffice/documentserver:latest

Updating SeaFile does look similar:

user@docker:~$ sudo docker stop seafile
user@docker:~$ sudo docker rm seafile
user@docker:~$ sudo docker run -d --name seafile --net cloud --ip 192.168.20.2 -e SEAFILE_SERVER_LETSENCRYPT=true -e SEAFILE_SERVER_HOSTNAME=cloud.myserver.com -e SEAFILE_ADMIN_EMAIL=me@mymail.com -e SEAFILE_ADMIN_PASSWORD='MyPW' -v /srv/seafile:/shared -p 80:80 -p 443:443 seafileltd/seafile:latest

Please keep in mind that you have to modify the docker run command again so that it matches your initial create command. In order to keep track of whether there are any updates available, you could write a Cronjob which does the steps above automatically (something I really wouldn’t recommend due to possible update failures). Or you subscribe to the developers’ newsletter so that you get notified when a new version is released. Sadly, the Docker Hub doesn’t have a functionality for sending mails when an image is updated (as of May 2019).
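One small tip: you can pull the new image before you remove the old container. This way the downtime is reduced to the few seconds the actual recreation takes:

user@docker:~$ sudo docker pull onlyoffice/documentserver:latest
user@docker:~$ sudo docker pull seafileltd/seafile:latest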

Extending Let’s Encrypt certificates

Extending the validity of the Let’s Encrypt certificate for SeaFile is really simple. Just recreate the container like you would for an update. SeaFile checks the certificate automatically during the recreation process and extends its validity.
For the manually created Let’s Encrypt certificate for OnlyOffice we have to stop the SeaFile container (because this container is using ports 443 and 80, which we need for certbot). Then we simply repeat the commands from the first time we got the certificate. This includes the copy commands as well as adding the Let’s Encrypt chain, too. While renewing the certificate, certbot may ask you if you want to keep the actual certificate or if you want to replace it. This message only pops up if your actual certificate isn’t already expired. You can suppress this message with the --force-renew parameter:

user@docker:~$ sudo certbot certonly --force-renew --standalone -d office.myserver.com

You can of course wrap these steps in a Cronjob as well. This would allow you to extend the certificate without even touching your system.
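As a sketch, such a Cronjob could call a small shell script like the following. This is just an illustration built from the commands above; the domain and paths are the ones used in this article and have to be adjusted to your setup:

#!/bin/sh
# Example renewal script for the OnlyOffice certificate (adjust domain and paths!)
docker stop seafile                  # free ports 80/443 for certbot
certbot certonly --force-renew --standalone -d office.myserver.com
docker start seafile
cp /etc/letsencrypt/live/office.myserver.com/cert.pem /srv/onlyoffice/data/certs/onlyoffice.crt
cp /etc/letsencrypt/live/office.myserver.com/privkey.pem /srv/onlyoffice/data/certs/onlyoffice.key
wget -q -O /tmp/chain.pem https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt
cat /tmp/chain.pem >> /srv/onlyoffice/data/certs/onlyoffice.crt
rm /tmp/chain.pem
docker restart onlyoffice            # load the renewed certificate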

A few words about firewalling

If you’re using a firewall (which I highly recommend), make sure that ports 80 and 443 (HTTP / HTTPS) are accessible from the world wide web. Also make sure that the port you’ve set for the OnlyOffice instance is accessible from outside (in this article this port was 8443). The client has to be able to communicate with OnlyOffice, not just the SeaFile server.

SeaFile with OnlyOffice: A great couple

As you can see, thanks to Docker, building and maintaining your own self-hosted cloud is easier than ever. However, if you think that it is still too difficult for you to manage such a solution and thus you want to stick to a centrally hosted solution like Dropbox, Google Drive or OneDrive, at least consider encrypting the files you store on their servers, for your own privacy and security. Solutions like Boxcryptor (free in the basic version) or Cryptomator (open source and free to use) make it very easy to encrypt your files at cloud providers.


DLNA server with MiniDLNA under Linux / Raspberry Pi

DLNA is a great service. With a DLNA server you can distribute videos, music or pictures to almost every Smart TV and / or set top box like an Amazon Fire TV. With DLNA you don’t have to bother whether your TV supports a given file format. DLNA covers this for you. One of the DLNA services which is easy to install, configure and use is MiniDLNA. This article shows you how to set up a MiniDLNA server under Linux / Raspberry Pi in a few simple steps.

Which hardware to use?

The good thing is that you don’t have to use an Intel / AMD based machine to stream Full HD over DLNA. Even a Raspberry Pi with an external USB hard drive attached is capable of streaming Full HD movies over Gigabit Ethernet. If you want to build yourself a DLNA Raspberry Pi server, I would recommend the following hardware:

Whether you go with the Raspberry Pi setup or not, ensure that you have a hard drive which is big enough to store your media files. As the Linux distribution of choice I recommend Ubuntu or Debian (this tutorial is written for Debian and Ubuntu). If you’re going with a Raspberry Pi setup, check out Raspbian (which is a Debian made for the Raspberry Pi). To set up your Raspberry Pi with Raspbian, you can check out the Raspberry Pi image creation tutorial from the Raspberry Pi Foundation.

Why MiniDLNA as the DLNA server software of choice?

Besides MiniDLNA there are plenty of other services available. Two of the biggest solutions are MediaTomb and Twonky. Both are the opposite of MiniDLNA: they come with more complex and more powerful configuration tools. At the same time they are way more resource hungry. MiniDLNA works with a „keep-it-simple“ approach. You basically just have to install the service and tell MiniDLNA where the media files you want to stream are located.
Besides the „keep-it-simple“ factor, MiniDLNA is also a very resource-friendly solution, as already mentioned. This goes hand in hand with the resource limits a Raspberry Pi imposes on us. However, even if you’re going to install a MiniDLNA server on an Intel Core i7, a straightforward, easy to install / use solution is always the one you should consider first, in my humble opinion.

Install MiniDLNA

The Raspbian, Debian and Ubuntu package repositories already provide a ready to go MiniDLNA package. With that being said, the following command installs the latest available MiniDLNA package onto your system:

user@raspberrypi:~$ sudo apt-get update && sudo apt-get install minidlna

Depending on your internet speed, the download and installation of the MiniDLNA package should be done within a minute or two.

Configure MiniDLNA

At this point of the tutorial I assume that your (external USB) hard drive is already formatted and filled with the media you want to share over DLNA. To keep the example as concrete as possible, I also assume that your hard drive is already mounted on your Linux machine under /mnt/usb. If your hard drive is mounted at a different location, simply replace /mnt/usb with the mount point you have chosen.
The configuration file for MiniDLNA is simple. While we could dive deeper into the configuration parameters, we want to keep it as simple as possible as well. The only two parameters which are interesting for our setup for now are media_dir and user. To set these two configuration parameters, open the configuration file with your editor of choice and read on. The configuration file is located at /etc/minidlna.conf.

Start MiniDLNA as a non-root user

By default MiniDLNA starts its process as the root user. While this makes things easier, it’s a security issue which should be fixed. To do so, scroll down in the MiniDLNA configuration file and search for the following lines:

# Specify the user name or uid to run as (root by default).
# On Debian system command line option (from /etc/default/minidlna) overrides this.
#user=minidlna

Remove the leading hash from the user line. This tells the MiniDLNA daemon to start the process as the user minidlna. The user minidlna was already created when installing MiniDLNA two steps before.
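After removing the hash, the line should simply look like this:

user=minidlna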

Add media directories to MiniDLNA

MiniDLNA supports audio, picture and video files. You don’t have to store all files on one single hard disk to share them over MiniDLNA. However, you have to configure a media directory per storage. You can do this in the MiniDLNA configuration file, too:

# Path to the directory you want scanned for media files.
#
# This option can be specified more than once if you want multiple directories
# scanned.
#
# If you want to restrict a media_dir to a specific content type, you can
# prepend the directory name with a letter representing the type (A, P or V),
# followed by a comma, as so:
# * "A" for audio (eg. media_dir=A,/var/lib/minidlna/music)
# * "P" for pictures (eg. media_dir=P,/var/lib/minidlna/pictures)
# * "V" for video (eg. media_dir=V,/var/lib/minidlna/videos)
# * "PV" for pictures and video (eg. media_dir=PV,[...]
media_dir=/var/lib/minidlna

As you can see, in the standard configuration file there is already a media directory configured. However, that’s just an example and you have to change this to the actual directory where your media files are stored. As an example, a setup with three media directories could look like this:

media_dir=/mnt/usb/audio
media_dir=/mnt/usb/video
media_dir=/mnt/usb/picture

After you’ve added all the desired media directories, save and close the configuration file. To finally apply the changes to the MiniDLNA server, you have to restart the service:

user@server:~$ sudo systemctl restart minidlna

The first scan process can take some minutes. When you copy / move additional files into these directories over time, MiniDLNA will find them automatically. Look at the webinterface if you want to know whether the scanning process is finished (go to the next chapter to find out how to access the MiniDLNA webinterface).
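If you ever think MiniDLNA missed some files, you can force a complete rescan by removing the database and restarting the service. A short sketch, assuming the default database location /var/cache/minidlna on Debian / Ubuntu / Raspbian:

user@server:~$ sudo systemctl stop minidlna
user@server:~$ sudo rm /var/cache/minidlna/files.db
user@server:~$ sudo systemctl start minidlna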

Webinterface

The MiniDLNA service comes with a small webinterface. This webinterface is just for informational purposes. You will not be able to configure anything here. However, it gives you a nice and short information screen showing how many files have been found by MiniDLNA. MiniDLNA comes with its own integrated webserver. This means that no additional webserver is needed in order to use the webinterface.
To access the webinterface, open your browser of choice and enter either the IP address or the hostname of the server / Raspberry you want to connect to, followed by the port 8200. E.g.: http://raspberrypi:8200:

[Image: MiniDLNA status page]

As you can see, I’m only streaming video files over my MiniDLNA setup. In the upper table you can see that my MiniDLNA Raspberry setup is ready to stream 1108 video files on demand. The Connected clients table lists the currently connected clients. In this list I see devices like my Smart TV, my PlayStation and many others. Even though a lot of these clients aren’t streaming right now, they keep an active connection to the MiniDLNA server. When they start to stream some files, you will see the actual connections in the last cell of the second table.

The actual streaming process

This paragraph is just a short overview of how a connection from a client to the configured and running MiniDLNA server could work. In this scenario we simply use a computer which is in the same local area network as the server. As the client software we use VLC (VideoLAN Client). Simple, robust, cross-platform and open source. After starting VLC, go to the playlist mode by pressing CTRL+L. You will now see on the left side a category called Local Network. Click on Universal Plug’n’Play under the Local Network category. You will then see a list of available DLNA services within your local network. In this list you should see your DLNA server. Navigate through the different directories for music, videos and pictures and select a file to start the streaming process:

[Image: The MiniDLNA server was recognized by VLC]

This is just an example of how to connect to your MiniDLNA server with a desktop client. VLC is also available for Android devices. Using MiniDLNA with VLC on an Android device even allows you to use a Chromecast to cast a music file, a series of pictures or videos to your TV. However, if you have a Smart TV, most of them can connect to DLNA services directly.

Start, Stop and restart MiniDLNA

Starting, stopping or restarting the MiniDLNA service is „business as usual“. But just for the record, here are the commands:

user@server:~$ sudo systemctl start minidlna
user@server:~$ sudo systemctl stop minidlna
user@server:~$ sudo systemctl restart minidlna

Conclusion

Setting up your own DLNA server is really easy. If you use a Raspberry Pi in combination with a USB hard disk, you have a cheap but solid and flexible open source based solution. You are not forced to use a prebuilt NAS appliance which may limit you in the maximum size of the hard disk or the file formats you want to use. Also, installing and configuring your own DLNA solution is a good learning experience. So what are you waiting for? Start streaming your own movies, pictures and music via DLNA. And if you have any questions or you just want to let me know how your own DLNA setup looks: Leave a message in the comments below 🙂


Time-controlled shutdown/reboot of Linux or Windows

Time-controlled shutdowns or reboots are often needed. For example, you want to go to sleep, but Steam is still updating your installed games and you want to have them fully updated when you wake up again. You look into your Steam download queue and find out that it just needs two more hours. With a time-controlled shutdown you can go to sleep without interrupting this update process. This short article describes how to shut down or restart your Linux or Windows machine at a given time.

Time-controlled shutdown/reboot on Linux

Shutdown

The good thing here is that every Linux distribution can execute these commands. They are the same on every distribution and every system. So, with the following command you tell your Linux machine to shut down automatically after 2 hours:

user@machine:~$ sudo shutdown -h +120

The command shutdown speaks for itself. With the parameter -h you tell the shutdown command to power off the computer. The alternative would be halt. Even today some PCs don’t turn off the power when they receive a halt; with the poweroff signal, however, the PC powers off completely. The number after the -h switch, prefixed with a plus sign, is the time in minutes from now when the PC should power off. In this example this would be 120 minutes. You could also directly enter a time here. In the following example the PC would power off at 21:45:

user@machine:~$ sudo shutdown -h 21:45

And if you want to shut down your machine right now, you can use the word now:

user@machine:~$ sudo shutdown -h now
Reboot

With the same shutdown command you can also do a reboot. To do so, simply replace the -h switch with -r. The following example reboots the computer after 120 minutes (two hours):

user@machine:~$ sudo shutdown -r +120

You can also enter a time here as well:

user@machine:~$ sudo shutdown -r 21:45

And as you may have seen the pattern, the following command restarts the PC right now:

user@machine:~$ sudo shutdown -r now
Cancel an active shutdown or reboot

If you scheduled a time-controlled shutdown or reboot and want to cancel it for whatever reason, simply use the shutdown command with the -c switch like this:

user@machine:~$ sudo shutdown -c
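As a small extra: the shutdown command also accepts a broadcast (wall) message after the time argument, which is shown to all logged-in users on the machine. A quick example:

user@machine:~$ sudo shutdown -h +120 "The system is going down in 2 hours"

This is nothing you need on a single-user desktop, but it’s good to know on shared machines.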

Time-controlled shutdown/reboot on Windows

In order to do a time-controlled shutdown or reboot on Windows, we need a terminal window. This can easily be opened by pressing the keyboard combination Windows+R. In the following window enter cmd and click OK:

[Image: The Run window]

Afterwards you will see a console terminal which looks like this:

[Image: A command line in Windows 7]

Shutdown

For a time-controlled shutdown under Windows, there is (just like in Linux) the tool shutdown available. The following command shuts down your Windows PC in 2 hours (2 × 60 × 60 = 7200 seconds):

C:\Users\testuser> shutdown /s /t 7200

Like in Linux, the /s switch tells your PC to shut down / power off. The /t switch tells the shutdown command in how many seconds the PC should shut down. If you want to shut down your PC „immediately“, simply omit the /t switch like this:

C:\Users\testuser> shutdown /s

The PC will then shut down within the next 30 seconds (which is the default value when executing the shutdown command without a timer under Windows).

Restart

For restarting, replace the /s switch with the /r switch. E.g., this will trigger a reboot after two hours:

C:\Users\testuser> shutdown /r /t 7200

Or to trigger a reboot right now (30 seconds):

C:\Users\testuser> shutdown /r
Cancel an active shutdown or reboot

If you entered the wrong time for the shutdown / reboot or if you simply changed your mind, enter the following command to cancel the active shutdown / reboot:

C:\Users\testuser> shutdown /a

Conclusion

For a lot of people, shutting down or rebooting the PC at a specific time is a much-needed feature. I’m the best example of this kind of person. Since I moved to my new living place, the internet is only a twentieth of what it was before. This means that downloading games through Steam is time and bandwidth consuming. But starting the download right before I go to bed and using the shutdown command to plan a shutdown helps a little bit. Over night my bandwidth can be fully used by Steam so that nobody gets disturbed. When the download is finished, my system shuts down, which saves energy and protects the hardware. So knowing how to shut down your PC at a specific time is a „must know“. And now you know as well 🙂
Do you have any thoughts about this topic you want to share? Are you actively using time-controlled shutdown or reboots? Let me know in the comments below.


How to setup a 7 Days to Die server under Linux

Setting up a 7 Days to Die server under Linux isn’t as difficult as you may think. Valve, the company behind Steam, provides a tool which makes it easy, even for people who aren’t that familiar with the Linux command line. This article covers how to set up your own 7 Days to Die server on a Ubuntu or Debian installation.

About 7 Days to Die

7 Days to Die is a zombie apocalypse survival crafting game which was initially released in December 2013. The game is still in an „Early Access“ stage. 7 Days to Die is similar to Minecraft in the way it is played, but provides better graphics and a different scenario. There are also some quests included, like treasure hunting. In the latest version (as of writing in June 2018) the developer The Fun Pimps also integrated vehicles into the game. Like in Minecraft, the real fun with 7 Days to Die starts when you play with friends. Surviving, building and creating in a randomly generated world can bring you hours of fun and a long lasting motivation to achieve the things you want to achieve.

Install requirements

On a fresh Debian or Ubuntu installation, we need to install some requirements first. With the following command we update the package repositories and install the necessary requirements:

user@machine:~$ sudo apt-get update && sudo apt-get install screen wget

Adding 32-Bit libraries

The Steam command line tool is only available as a 32-Bit program. As of today most systems are 64-Bit based. Ubuntu, for example, doesn’t even support 32-Bit anymore. If you’re running a 32-Bit system, you can skip this part. To find out if you have a 32-Bit system installed, just issue the following command on your system:

user@machine:~$ arch

If the output of this command is i386 or i586, you have a 32-Bit system. If it’s x86_64, you’re using a 64-Bit system. In that case you have to issue the following two commands as well:

user@machine:~$ sudo dpkg --add-architecture i386
user@machine:~$ sudo apt-get update && sudo apt-get install lib32gcc1

This installs the lib32gcc1 package, a 32-Bit library which is needed by the Steam command line tool. If this library isn’t installed, the following commands will fail.

Download and extract Steam

Next, we download the Steam command line tool for Linux in order to fetch the server files via the Steam network. Before that, we create a separate folder (called steamcmd) where we are going to download the Steam command line tool to:

user@machine:~$ mkdir steamcmd
user@machine:~$ cd steamcmd
user@machine:~/steamcmd$ wget http://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz

The wget command downloads the Steam Linux command line tool. After the download is finished, we can extract the (compressed) tool like this:

user@machine:~/steamcmd$ tar -xvzf steamcmd_linux.tar.gz

Updating the Steam tool

You can now start updating the Steam tool. Execute the following command to start the update:

user@machine:~/steamcmd$ ./steamcmd.sh

This process will take some time. In the meantime, you will see a lot of output on the console like this:

Redirecting stderr to '/home/user/.steam/logs/stderr.txt'
ILocalize::AddFile() failed to load file "public/steambootstrapper_english.txt".
[ 0%] Checking for available update...
[----] Downloading update (0 of 13.028 KB)...
[ 0%] Downloading update (38 of 13.028 KB)...
[ 0%] Downloading update (38 of 13.028 KB)...
[ 0%] Downloading update (38 of 13.028 KB)...
[ 0%] Downloading update (38 of 13.028 KB)...
[...]

When the process is finished, you will see an output which looks like this:

Steam Console Client (c) Valve Corporation
-- type 'quit' to exit --
Loading Steam API...OK.
Steam>

The last line, Steam>, tells you that the Steam command line tool is now open and Steam is waiting for you to enter a command.

Login into Steam

When using the Steam command line tool, you have to log in just like with the graphical desktop version. Enter the command login followed by your Steam username. Steam will then ask you for your password and a Two Factor Authentication code (if enabled). When you enter your password, no input will be shown. This is normal, just like in almost every Linux / UNIX software:

Steam> login <USERNAME>
Logging in user '<USERNAME>' to Steam Public...
password:
Enter the current code from your Steam Guard Mobile Authenticator app
Two-factor code:XXXXX
Logged in OK
Waiting for user info...OK

Download and install the 7 Days to Die server files

Now that you’re logged in, you can finally start downloading and installing your 7 Days to Die server. With the first of the following two commands we force Steam to install the 7 Days to Die server files into the directory 7dtd_server. The second one starts the actual installation process of the server files. The number 294420 is the ID of the 7 Days to Die server files:

Steam> force_install_dir 7dtd_server
Steam> app_update 294420

Alternatively, if you want to use the latest BETA of the dedicated server, use the following two commands instead:

Steam> force_install_dir 7dtd_server
Steam> app_update 294420 -beta latest_experimental

This will download the latest experimental BETA branch the developers provide to the public audience.
You will again see an output which shows you the actual progress:

Update state (0x5) validating, progress: 2.19 (75656327 / 3459239207)
Update state (0x61) downloading, progress: 0.21 (7340032 / 3459239207)
Update state (0x61) downloading, progress: 1.02 (35228112 / 3459239207)
Update state (0x61) downloading, progress: 2.75 (95064875 / 3459239207)
[...]

The app_update command can take even longer than the Steam update process. It basically depends on your internet connection speed. After the command has been successful, you will get yet another message about it:

Success! App '294420' fully installed.
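A little side note: if a download ever gets interrupted or you suspect broken files, steamcmd can re-check an existing installation against the Steam servers with the validate option:

Steam> app_update 294420 validate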

You can now quit the Steam command line tool by simply entering quit. Congratulations, you have successfully downloaded and installed a 7 Days to Die server under Linux.
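By the way, steamcmd can also run all of these steps non-interactively, which comes in handy for update scripts later on. A sketch (replace <USERNAME> with your Steam username; you will still be prompted for your password and Steam Guard code):

user@machine:~/steamcmd$ ./steamcmd.sh +force_install_dir 7dtd_server +login <USERNAME> +app_update 294420 +quit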

Configuring your server

You have a lot of options to configure your 7 Days to Die server to your wishes. The file to do so is serverconfig.xml, which is in the same directory as your 7 Days to Die server files. So change into the 7 Days to Die server directory:

user@machine:~/steamcmd$ cd 7dtd_server

Within this directory there is already an example serverconfig.xml. While this file should already work, you should at least make some tweaks. To edit this file you can use the editor nano:

user@machine:~/steamcmd/7dtd_server$ nano serverconfig.xml

With CTRL+W you can search for a string. With CTRL+O you can save the file. And with CTRL+X you close the editor. The following parameters are a suggestion for how you could modify your serverconfig.xml.
[Table: suggested serverconfig.xml parameters]
You can always come back and change the parameters to your desire. However, you always have to restart the server after each change. For a full list of all possible values, check out this list.
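Since the original parameter table can’t be reproduced here, a few typical entries should give you an idea. The property names are the ones used in the example serverconfig.xml the server ships with, so double check them against your copy:

<property name="ServerName" value="My 7DtD server"/>
<property name="ServerPort" value="26900"/>
<property name="ServerPassword" value="secret"/>
<property name="ServerMaxPlayerCount" value="8"/>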

Starting the server

Phew, a lot of stuff, right? But now the time has come to start your server. In order to keep your server running constantly, you have to use screen. screen is a tool which keeps programs running even after you log out. But first, ensure that you’re still in the 7dtd_server directory and enter the following commands:

user@machine:~/steamcmd/7dtd_server$ screen -S 7dtd
user@machine:~/steamcmd/7dtd_server$ ./7DaysToDieServer.x86_64 -logfile output.log -configfile=$HOME/steamcmd/7dtd_server/serverconfig.xml -dedicated

Your server will now start. This can take up to 5 minutes. You get no output when the server is ready. If you want to know more about what is happening, take a look at the output.log file within your 7dtd_server directory. To close the screen session (and therefore let the server run in the background), press CTRL+A followed by the D key for detaching. You can now close the remote session to your server without shutting down your 7 Days to Die server itself. If you want to attach to the screen session again, simply enter this command:

user@machine:~$ screen -r 7dtd

You can now start 7 Days to Die on your desktop / gaming machine, click on Connect To Server and either enter the IP address / name of your server (FQDN) and click on the connect symbol, or search for your server name in the upper left (if you set your server to public in the step before):

[Image: Connection field in 7 Days to Die]

If you can’t connect at this point, you may have to check your firewall or the firewall on your server (if you use one). By default, the 7 Days to Die server listens on port 26900. This port has to be accessible from the internet.
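If you’re using ufw as your firewall on the server, opening the port could look like this. Note that this is just a sketch: 26900/TCP is the default game port mentioned above, and the game also uses a few UDP ports around it (check your serverconfig.xml for the exact ServerPort value):

user@machine:~$ sudo ufw allow 26900/tcp
user@machine:~$ sudo ufw allow 26900:26902/udp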

Stopping the server

To stop your server, simply resume your screen session like this:

user@machine:~$ screen -r 7dtd

And press CTRL+C afterwards. It can take a few seconds up to a minute before the server shuts down.

How to become an admin?

In order to become an admin, you have to tell your server your Steam ID. Otherwise the server can’t decide which player should really get admin rights. To get your Steam ID, simply open Steam, click on your profile and copy the long number within the URL:

[Image: Steam ID]

On your server, change into the following directory and create a file called serveradmin.xml. Open it afterwards with a text editor (we use nano again in this case):

user@machine:~$ cd $HOME/.local/share/7DaysToDie/Saves/
user@machine:~/.local/share/7DaysToDie/Saves$ touch serveradmin.xml
user@machine:~/.local/share/7DaysToDie/Saves$ nano serveradmin.xml

Enter at least the following lines:

<adminTools>
  <admins>
    <admin steamID="12345YourSteamID" permission_level="0" />
  </admins>
</adminTools>

Don’t forget to change the value 12345YourSteamID to your Steam ID! Press CTRL+O to save and CTRL+X to close the file. Afterwards, stop and start the 7 Days to Die server and that’s it. You could even create moderators and give each of them specific rights. However, this would definitely go beyond the scope of this article. If you want to know more about the possible rights management on a 7 Days to Die server, check out this link.

Last words …

As you can see, with a little bit of work you can set up your own 7 Days to Die server on a cheap Linux VPS, for example. Of course you could also go with a hosted one and not bother with all the setup stuff, but this is way more costly. And if you’re interested in working with Linux machines, this is a good way to learn a little bit about how to set up and maintain server software under Linux in general. If you have any comments, wishes or notes, please let me know down below in the comments.

Monit: An easy to use monitoring solution for Linux

Besides Nagios, Icinga and check_mk there are some other, slimmer tools to monitor servers. Especially if you are a private person and want to monitor your vServer, Raspberry Pi or whatever, you may want to use a smaller and easier monitoring solution than those big three.
This article is about monit. Monit is one of these simple monitoring tools. But just because the configuration is simpler, that doesn’t mean that you are limited in the ways you can monitor your servers with Monit.

How does monit work? And why not just use Nagios?

monit works differently than Nagios or Icinga. While Nagios, Icinga and check_mk need a monitoring server which connects to the machines it checks at given intervals, monit doesn’t need this kind of server to do its checks. So basically that means that wherever you install monit, it checks locally and reports the results via mail.
At first this sounds like a disadvantage, whether due to reliability or security, but think about it for a second: What do Nagios, Icinga or check_mk basically do? They ping the destination machine and if this ping is successful, they open up an NRPE or SSH connection and execute the given CPU, RAM, service checks and so on, locally. The console output is then used for analysis. If a check exceeds its given limit, Nagios / Icinga / check_mk creates an e-mail and sends it to the user / group of users responsible for this service. So basically the server executes local commands, just like monit. The initiator, however, is not local, it’s a different (remote) server.
You may now think „but how do I even test if the server is available?“. Well, you just set up another monit instance which checks if the destination is pingable, just like with Nagios / Icinga / check_mk.
A real benefit of monit is its easy configuration and syntax. While you have to dig through a bunch of files in order to create a simple check in Nagios, monit allows you to simply put one (human readable) configuration file in the correct directory and the check is ready to use. In combination with a configuration management tool like Ansible (I’ve already written an article about Ansible a long time ago) you have an easy to use, central monitoring tool with automatically distributed configuration files.

Installation of monit

Monit is available for most Linux distributions. With the following command you install monit on a Ubuntu / Debian machine:

user@machine:~$ sudo apt-get update && sudo apt-get install monit

Or if you’re an openSUSE user, you can install monit like this:

user@machine:~$ sudo zypper ref && sudo zypper in monit

That’s basically it. Compared to Nagios or Icinga, you don’t need an installed Apache web server. Monit comes with its own built-in web service. In order to check if monit is running, you can use systemctl:

user@machine:~$ sudo systemctl status monit
● monit.service - LSB: service and resource monitoring daemon
 Loaded: loaded (/etc/init.d/monit; generated; vendor preset: enabled)
 Active: active (running) since Wed 2017-10-11 11:47:39 CEST; 3 months 13 days ago
 Docs: man:systemd-sysv-generator(8)
 Tasks: 2 (limit: 4915)
 CGroup: /system.slice/monit.service
 └─18228 /usr/bin/monit -c /etc/monit/monitrc
Dez 26 06:25:02 machine systemd[1]: Reloading LSB: service and resource monitoring daemon.
Dez 26 06:25:02 machine monit[15742]: Reloading daemon monitor configuration: monit.
Dez 26 06:25:02 machine systemd[1]: Reloaded LSB: service and resource monitoring daemon.
Jan 10 06:25:02 machine systemd[1]: Reloading LSB: service and resource monitoring daemon.
Jan 10 06:25:02 machine monit[15692]: Reloading daemon monitor configuration: monit.
Jan 10 06:25:02 machine systemd[1]: Reloaded LSB: service and resource monitoring daemon.

In order to start, stop or restart the service, use these following commands:

user@machine:~$ sudo systemctl start monit
user@machine:~$ sudo systemctl stop monit
user@machine:~$ sudo systemctl restart monit

Enable the monit built-in web server

If you’re a Debian / Ubuntu user, monit stores its main configuration under /etc/monit/monitrc. This file is really well documented. However, I want to give a short overview of how to enable the built-in web server, which is one of the most important features.
You can find the following block (at the time of writing) at line 155:

# set httpd port 2812 and
# use address localhost # only accept connection from localhost
# allow localhost # allow localhost to connect to the server and
# allow admin:monit # require user 'admin' with password 'monit'

If you uncomment this section, the built-in monit web server will be started and will listen on port 2812. The second line sets the network address the web server instance listens on. If it only listens on localhost, you will not be able to connect from any other host. E.g. if your server has the IP address 192.168.0.2, you can replace localhost with this IP address. This makes the monit built-in web server listen on this network interface.
The third line regulates which other hosts are allowed to establish a connection to the web server. You can define single IP addresses or even whole subnets (like 192.168.1.0/24). Just like before, simply replace localhost with the IP address or subnet which you want to allow to connect to the web server.
The fourth line defines a user with a password which will be able to log in. monit does not allow an anonymous login, which is obviously good from a security point of view. In this example the username would be admin with the password monit.
When you’re done setting these lines, restart the monit service:

user@machine:~$ sudo systemctl restart monit

You can now visit the monit web interface of your server. In this example the address would be something like http://192.168.0.2:2812, or just use the FQDN: http://my.monit.fqdn:2812.
Besides using the web interface, you can also check your services on the command line. To do so you also have to have the web interface enabled, because the command line client simply fetches an HTTP output and processes it. To get an overview of all your monitored processes you can enter the following command:

user@machine:~$ sudo monit status
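By the way, if the full status output is too verbose for you, monit also ships a more compact overview:

user@machine:~$ sudo monit summary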

Enable mail notifications

Nagios or Icinga can send you a mail every time a host or a service of this host goes down. monit also offers this feature. To enable mail notifications, open the monit configuration file (/etc/monit/monitrc) once more and search for the following lines:

## Set the list of mail servers for alert delivery. Multiple servers may be
## specified using a comma separator. If the first mail server fails, Monit
# will use the second mail server in the list and so on. By default Monit uses
# port 25 - it is possible to override this with the PORT option.
#
# set mailserver mail.bar.baz, # primary mailserver
# backup.bar.baz port 10025, # backup mailserver on port 10025
# localhost # fallback relay

As you can see, you can set multiple mail servers which monit will try to connect to each time a notification is about to be sent. The downside here is that monit does not support authentication against mail servers. This basically means that you can’t just enter a public mail service like GMail. If you want to use such a service, you have to have a local mail server set up. You can configure the mail server as a satellite / client to keep it simple. If your mail server is up and running, you can simply set the mail server in the monitrc file like this:

set mailserver localhost

If you have access to a mail server which doesn’t require authentication, you can of course replace localhost with this mail server.
Now that you’ve set the mail server, you also have to enter the recipient for the notification mails. Search for the following line in the monitrc file:

# set alert sysadm@foo.bar # receive all alerts

Uncomment this line and replace sysadm@foo.bar with the recipient of your choice. You can also define which kinds of notifications are going to be sent to this mail address. However, this is more advanced configuration which would go beyond the scope of this article.
When everything is set, restart monit and you will now get a mail each time a service status changes:

user@machine:~$ sudo systemctl restart monit

Some examples for monitoring services

To monitor new programs or services, you have to create a new file under the subdirectory /etc/monit/monitrc.d/. Simply name this file after the service which is going to be monitored. The following examples show how to monitor services, programs and system resources with monit:

Monitoring a running program / service or service without a PID file
check process TeamSpeak
  matching "ts3"

In this example we simply check if a process called „ts3“ is running on the machine. You can run ps ax on the command line yourself on the target machine to see if the process is running. That’s basically what monit is doing. The name of this check is TeamSpeak, as seen in the first line. You can change this to whatever you want.

Monitoring a program / service with a PID file
check process mysqld with pidfile /var/run/mysqld/mysqld.pid
  start program = "systemctl start mysql"
  stop program = "systemctl stop mysql"
  if failed host localhost port 3306 protocol mysql with timeout 15 seconds for 3 times within 4 cycles then restart
  if failed unixsocket /var/run/mysqld/mysqld.sock protocol mysql for 3 times within 4 cycles then restart
  if 5 restarts with 5 cycles then alert

In this specific example we monitor a MySQL instance installed on the system. If the service crashes, monit tries three times within four cycles to restart the service. For this restart monit uses the two commands defined in start program and stop program. In this case monit checks the unixsocket as well as a network connection to get in touch with MySQL before it tries to restart the service.
If after four cycles (which includes restarting the service after each cycle in this case) the service is still unreachable for monit, monit runs into the fifth cycle. This last cycle is defined with „alert“. The administrator, who has been declared in the monitrc file, will get a mail. If there is another competent person defined for this specific service, this person will be mailed as well. The name of the service in this example is mysqld.

Check for disk usage
check device root with path /
  if SPACE usage > 90% then alert

With this check, we monitor the root file system. If the used space is over 90 percent, monit will send out an alert via mail. It would also be possible to check against the used inodes of a partition. If you have more than just one partition, you have to create a check for each partition you want to monitor.

Check for ram, swap and CPU usage
check system my.machine.fqdn
  if memory usage > 90% then alert
  if cpu usage > 90% for 4 cycles then alert
  if swap usage > 95% for 4 cycles then alert

In this example we check the memory, CPU and swap usage of the system. This example is similar to the disk usage check. If the memory usage is higher than 90% of the overall memory of this system, monit will send out an alert. If the CPU usage is over 90% for four cycles, monit will send an alert as well. The four cycles ensure that monit does not start to report every little peak the CPU has. The same goes for swap: if 95% of the overall swap is in use four times / cycles in a row, monit will report this. In the first line you would have to change my.machine.fqdn to the real FQDN of your system.

Check an external host (ping)
check host other.host.fqdn with address other.host.fqdn
  if failed icmp type echo
    count 3 with timeout 5 seconds
    2 times within 3 cycles
then alert

With this check, monit tries to ping the specified other.host.fqdn. With every ping command monit starts, it does three pings with a timeout of five seconds each. If the ping fails two times within three cycles, monit sends out an alert that the host is unreachable / offline. In order to get this check up and running, you have to change other.host.fqdn to the desired host which you want to check.
For every check you change, add or remove, you have to restart the monit service:

user@machine:~$ sudo systemctl restart monit

You can now see the status of the services with the help of the web interface or with the command line:

user@machine:~$ sudo monit status

If you want to get the status of one single check, you can append the name of the check to the status command. You have to use the name you defined within the configuration file for this check. E.g. to get the status of the first check, TeamSpeak, which we’ve configured in these examples:

user@machine:~$ sudo monit status TeamSpeak

mmonit: If you need a central web instance

While monit offers a web interface which you can use to get an overview of the currently checked services, I understand that literally nobody wants to connect to each single monit web instance if you have 30 hosts or more.
Now, what you could do is write a wrapper script (in bash, Python, etc.) which helps you to get the actual status of all services on all hosts. However, a more comfortable and easier solution is to use mmonit. mmonit is written by the official monit developers, gathers all information from each single monit instance and shows the collected data on one single page.
Sadly, mmonit isn’t free. You can grab a 30 day trial if you want to, but in the end you have to pay a small one-time fee in order to use the software. The prices start as low as 65 € for 5 hosts, and the per-host price drops sharply the more hosts you’re using. With 50 hosts you also get support, for just 349 €. Again, this is a one-time license. There are no additional costs coming up. If you compare this to other solutions like check_mk, mmonit is really cheap.

[Image: The mmonit price table as of 24th January 2018]

Sadly, I’m unable to tell you more about mmonit for now, as I haven’t had the need to use it yet. My own wrapper script is doing more than just fine. Right now I’m monitoring more than 30 hosts with monit. But maybe the future will push me to get a mmonit license 😉

Final words

Is monit a full replacement for Nagios or Icinga? I’m unable to say if monit really fits into server farms with more than 100 servers. But if you want to have an easy monitoring solution which works with scripts (and Nagios plugins, if you configure your checks accordingly), you will most likely be happy with monit.
I’ve seen some talks recently on StackOverflow about monit, and while a lot of people say that they wouldn’t use monit in bigger clusters, they use it in certain spots. For example, somebody talked about a customer with roughly 20 servers. He rolled out monit there because of the fast and easy configuration. In order to check all 20 hosts at once, he created a bash script which connects to each host via SSH and public key authentication and issues the command monit status. To keep track of whether a host is still accessible from outside, he just pings it from another host with a ping check configured on another monit instance.
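Such a wrapper script doesn’t have to be complicated. A minimal sketch of this approach (assuming SSH public key authentication is already set up for all listed hosts) could look like this:

#!/bin/bash
# Hypothetical monit wrapper: print the check summary of every host
HOSTS="host1.example.com host2.example.com host3.example.com"
for host in $HOSTS; do
    echo "=== $host ==="
    ssh -o BatchMode=yes "$host" "sudo monit summary"
done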
As you can see, even if monit looks limited or „not as powerful as Nagios“, there are no borders. A lot of people use and love monit due to its easy configuration and flexibility. If you’re looking for a monitoring solution or for a Nagios / Icinga alternative, you should give monit a try.
Last hint: If you want to monitor a single server (e.g. a home server or a Raspberry Pi), monit is what you should look at before Nagios or Icinga, IMHO. In a setup like this, monit definitely shines.
