Rescuing a Linux system from near disaster
The more you know about how Linux works, the better you’ll be able to do some good troubleshooting when you run into a problem. In this post, we’re going to dive into a problem that a contact of mine, Chris Husted, recently ran into, and what he did to determine what was happening on his system, stop the problem in its tracks, and make sure that it was never going to happen again.
Disaster strikes
It all started when Chris’ laptop reported that it was running out of disk space: only 1GB of available disk space remained on his 1TB drive. He hadn’t seen this coming. He also found himself unable to save files and in a very challenging situation, since this laptop is the only system at his disposal and he needs it to get his work done.
When he was prompted by the system to “Examine or Ignore” the problem, he chose to examine it. Looking around, he noticed that his /var/log directory had become extremely large. Examining the directory more closely, he saw that his syslog file had grown to 365GB. Imagine being Chris and looking at something like this:
$ ls -lh /var/log/syslog
-rw-r----- 1 syslog adm 365G Jun 5 12:11 /var/log/syslog
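If you ever find yourself in the same spot and aren’t sure which directory is eating your disk, a generic sketch using standard GNU coreutils (not one of Chris’ documented steps) is to check overall usage with df and then let du rank the largest directories under a suspect location:
$ df -h /                                                        # how full is the root filesystem?
$ sudo du -xh --max-depth=1 /var 2>/dev/null | sort -rh | head   # largest directories under /var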
Searching for help
Hunting around on the web, Chris found a post on Stack Overflow that encouraged capping the size of the syslog file.
The first thing he did was run these three commands:
$ sudo su -
# > /var/log/syslog
# systemctl restart syslog
The first command allowed him to take on root privileges, the second emptied the syslog file on the system and the third restarted the syslog daemon so it would continue to collect information about what was happening on the system. He still needed to track down the culprit.
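If you’d rather not open a root shell at all, the same thing can be done with two sudo commands. This is a minimal sketch rather than what Chris ran; it assumes the rsyslog service that Ubuntu uses to provide syslog:
$ sudo truncate -s 0 /var/log/syslog     # empty the file without deleting it
$ sudo systemctl restart rsyslog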
Next, he modified his logrotate settings (in the /etc/logrotate.d/syslog file) so the file could not become any larger than 1GB. He did this by adding the maxsize setting as pointed out in the lines below:
/var/log/syslog
{
        rotate 7
        daily
        maxsize 1G    <==
        missingok
        notifempty
        delaycompress
        compress
        postrotate
                /usr/lib/rsyslog/rsyslog-rotate
        endscript
}
The first line (rotate 7) ensures that seven generations of the syslog file will be retained along with the current one, but doesn’t resolve problems in which the current file grows to an enormous size in a single day. On a normal system, the collection of syslog files will look something like this when rotated daily:
$ ls -l /var/log/syslog*
-rw-r----- 1 syslog adm  828674 Jun 10 08:00 /var/log/syslog
-rw-r----- 1 syslog adm 2405968 Jun  9 16:09 /var/log/syslog.1
-rw-r----- 1 syslog adm  206451 Jun  9 00:00 /var/log/syslog.2.gz
-rw-r----- 1 syslog adm  216852 Jun  8 00:00 /var/log/syslog.3.gz
-rw-r----- 1 syslog adm  212889 Jun  7 00:00 /var/log/syslog.4.gz
-rw-r----- 1 syslog adm  219106 Jun  6 00:00 /var/log/syslog.5.gz
-rw-r----- 1 syslog adm  218596 Jun  5 00:00 /var/log/syslog.6.gz
-rw-r----- 1 syslog adm  211074 Jun  4 00:00 /var/log/syslog.7.gz
The combination of “rotate 7” (keep seven generations) and “daily” (rotate every day) leaves you with a set of files like those shown. Adding the maxsize setting means that your logs will rotate daily or whenever they reach the specified size, so you might be rotating logs more than once a day. Given the 1G setting, however, the current and previous files should never use more than 1GB each, and the remaining logs will likely use less than a tenth of that since they’ll be compressed. As a result, the syslog files are unlikely to use more than 3GB in total, far smaller than Chris’ 365GB. (You can get more detail on how log rotation works from this post.)
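After making a change like this, it’s worth confirming that logrotate accepts it before the next scheduled run. A quick sketch (adjust the path if your distribution keeps the rsyslog rules in a differently named file):
$ sudo logrotate -d /etc/logrotate.d/syslog     # dry run: parse the config and show what would happen
$ sudo logrotate -f /etc/logrotate.d/syslog     # force an immediate rotation to see the effect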
With the size of the syslog file constrained, Chris was ready to delve into the cause of the problem. First, he ran this command:
$ tail -f /var/log/syslog
This allowed him to focus on the bottom of the file while also displaying additional lines as they were being added. A stream of messages including strings like “baloo_file.desktop[2982]: org.kde.baloo.engine:” quickly identified Baloo (the file indexing and file search framework for KDE Plasma) as the source of the problem.
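If the offender isn’t as obvious as it was here, one rough way to rank the noisiest loggers is to count log lines by process tag. This sketch assumes the traditional syslog line format, in which the fifth field is the process name; adjust the field number if your entries look different:
$ awk '{print $5}' /var/log/syslog | sort | uniq -c | sort -rn | head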
Since Chris was using Ubuntu GNOME, he needed to look into why Baloo was running on his system at all. Then he recalled he had installed a file manager named Dolphin that might have brought Baloo along with it.
Using the balooctl command, he was able to verify that Baloo was indeed running and then stop it, using these commands as root:
# balooctl stop ; balooctl disable
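If you want to confirm the indexer’s state before and after, balooctl also provides a status subcommand (a small aside rather than one of Chris’ documented steps):
# balooctl status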
Then he removed Dolphin (which Software Manager hadn’t helped with) using these commands:
$ sudo apt remove --purge dolphin
$ sudo apt remove --purge libkf5balooengine5
$ sudo rm -rf ~/.local/share/baloo/
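To double-check that nothing Baloo-related is left behind, or to see which installed packages depend on the indexer, something like the following can help (a generic sketch using standard Debian/Ubuntu tooling, not part of Chris’ write-up):
$ dpkg -l | grep -i baloo                  # any Baloo packages still installed?
$ apt-cache rdepends libkf5balooengine5    # what depends on the Baloo engine?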
Afterwards, Chris’ system was immediately back up to speed, and he had recovered 300GB of his disk space. After a little more housecleaning (clearing caches, removing no-longer-used apps, and so on), Chris had recovered more than 400GB of drive space. He says his laptop now runs as fast as it did when Ubuntu was first installed.
Note that some Linux systems use messages files instead of syslog files, and that others (like Fedora) now use the journalctl command to display log data stored in the /var/log/journal directory.
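On journald-based systems, the same kind of runaway growth shows up in the journal instead, and journalctl has built-in options for checking and trimming it. A quick sketch (the 1G cap is just an example value):
$ journalctl --disk-usage                 # how much space the journal is using
$ sudo journalctl --vacuum-size=1G        # trim stored journal data down to roughly 1GB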
Wrap-Up
A worrisome problem, one that made a Linux laptop almost completely unusable, was resolved by freeing up some disk space and stopping the disk from filling up, analyzing the problem by reviewing the syslog entries, modifying the log rotation settings, and removing the software that was causing the problem.
I should emphasize that Chris considers himself a Linux user, not a “techie”, and was grateful to be able to track down and fix the problem himself with freely available help from other Linux users, or, as he describes it, “genuine expertise explained in plain English for average people”. He stressed how important this is for him as a Linux user and how important he imagines it is for all of us.
Given Chris’ experience, maybe more of us should consider capping the size of our log files, monitoring disk-space usage, and never forgetting how much help is available for us online.
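If you want to go one step further, a low-tech sketch of disk-space monitoring is a small cron entry that reports when the root filesystem gets too full. The 90% threshold and hourly schedule here are arbitrary, and cron needs a way to deliver output (for example, a MAILTO setting in the crontab):
0 * * * * df -P / | awk 'NR==2 && $5+0 > 90 {print "Root filesystem is at "$5}'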
Copyright © 2021 IDG Communications, Inc.