
systemd Demystified: Services, Timers, and Targets

Master systemd unit files, dependency ordering with Wants/Requires/After, timer units as cron replacements, socket activation, and diagnostic tools like journalctl and systemd-analyze blame.

Abhishek Patel · 9 min read


The Init System You Can't Avoid

If you manage Linux servers, systemd is the init system running your services, mounting your filesystems, and managing your logs. It replaced SysVinit on every major distribution years ago, and whether you love it or resent it, understanding systemd services, timers, and targets is non-negotiable for anyone doing serious Linux administration.

Most engineers interact with systemd through two commands -- systemctl start and systemctl restart -- and never look deeper. That's fine until a service won't start, a timer fires at the wrong time, or you need to debug boot order dependencies. This guide covers unit files, dependency ordering, timer units, socket activation, and the diagnostic tools that make systemd manageable.

What Is systemd?

Definition: systemd is a Linux init system and service manager that starts and supervises processes, manages dependencies between services, handles logging via journald, and provides a unified interface for controlling system state through units, targets, and timers.

systemd is PID 1 on modern Linux distributions. It's the first process the kernel starts, and everything else -- your SSH daemon, your web server, your database -- is a child of systemd. It reads declarative unit files that describe what to run, when to run it, and what it depends on.
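You can confirm this on any Linux box by asking the kernel what PID 1 is; on a systemd distribution it reports systemd, while other init systems or minimal containers will show something else:

```shell
# Read PID 1's command name straight from procfs
cat /proc/1/comm
```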

Unit Files: The Core Abstraction

Everything in systemd revolves around unit files. A unit is a configuration file that describes a resource systemd manages. The most common types:

  • .service -- a process or daemon
  • .timer -- a scheduled trigger (replacement for cron)
  • .socket -- a socket for on-demand activation
  • .target -- a group of units (like a runlevel)
  • .mount -- a filesystem mount point
  • .path -- watches a filesystem path for changes
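As a quick illustration of the last type, here's a minimal .path unit (file names are hypothetical) that triggers a matching service whenever a config file changes:

```ini
# /etc/systemd/system/reload-config.path (hypothetical example)
[Unit]
Description=Watch app config for changes

[Path]
# Activates reload-config.service whenever this file is modified
PathModified=/etc/myapp/config.yaml

[Install]
WantedBy=multi-user.target
```

By default, a foo.path unit activates the service with the same name (foo.service).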

Unit files live in three locations, in order of priority:

| Location | Purpose | Survives Updates |
| --- | --- | --- |
| /etc/systemd/system/ | Admin overrides and custom units | Yes |
| /run/systemd/system/ | Runtime units (transient) | No |
| /usr/lib/systemd/system/ | Package-installed defaults | Overwritten on updates |

Pro tip: Never edit files in /usr/lib/systemd/system/ directly. Use systemctl edit myservice to create an override file in /etc/systemd/system/myservice.service.d/override.conf. Your changes survive package updates, and you only need to specify the directives you're changing.
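For example, running systemctl edit nginx (service name is just an illustration) opens an editor and saves a drop-in like this:

```ini
# /etc/systemd/system/nginx.service.d/override.conf
# Only the directives listed here override the packaged unit;
# everything else is inherited from /usr/lib/systemd/system/nginx.service.
[Service]
Restart=always
RestartSec=2
```

systemctl edit runs a daemon-reload for you after saving, so the override takes effect on the next restart of the service.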

Anatomy of a Service Unit

Here's a real-world service unit for a Node.js application:

[Unit]
Description=My Node.js Application
Documentation=https://example.com/docs
After=network.target postgresql.service
Wants=postgresql.service
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
Type=simple
User=app
Group=app
WorkingDirectory=/opt/myapp
Environment=NODE_ENV=production
EnvironmentFile=/opt/myapp/.env
ExecStartPre=/usr/bin/npm run db:migrate
ExecStart=/usr/bin/node dist/server.js
ExecStop=/bin/kill -SIGTERM $MAINPID
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

How to Write a systemd Service Unit File

  1. Create the unit file at /etc/systemd/system/myapp.service
  2. Set the [Unit] section -- add a Description, and use After= and Wants= to declare dependencies on other services
  3. Configure the [Service] section -- set Type (usually simple or forking), specify User/Group, define ExecStart with the full binary path, and set Restart=on-failure
  4. Add [Install] -- use WantedBy=multi-user.target so the service starts on boot
  5. Reload and enable -- run systemctl daemon-reload then systemctl enable --now myapp

Key Directives Explained

ExecStartPre runs before the main process. Use it for database migrations, config validation, or directory creation. If it fails, the service won't start.

ExecStart is the main process. For Type=simple, systemd considers the service started as soon as this process is forked. Always use absolute paths.

ExecStop defines how to stop the service. If you don't specify it, systemd sends SIGTERM followed by SIGKILL after a timeout (90 seconds by default).
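If your service needs more (or less) time to shut down cleanly, you can tune that window per unit:

```ini
[Service]
# Give the process 30 seconds to exit after SIGTERM before SIGKILL
TimeoutStopSec=30
```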

Dependency Ordering: Wants, Requires, After

Dependency management is where people get confused. systemd has two separate concepts that work together:

| Directive | Purpose | Effect on Failure |
| --- | --- | --- |
| Wants= | Soft dependency -- try to start the other unit | This unit still starts if the wanted unit fails |
| Requires= | Hard dependency -- the other unit must succeed | This unit fails if the required unit fails |
| After= | Ordering -- wait for the other unit to finish starting | Only controls order, not dependency |
| Before= | Ordering -- start this unit before the other | Only controls order, not dependency |

Watch out: Wants= and Requires= don't imply ordering. If you write Wants=postgresql.service without After=postgresql.service, both services start simultaneously. You almost always want both Wants= and After= together.
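In practice, the pairing looks like this in the [Unit] section:

```ini
[Unit]
# Start postgresql if it isn't already running, and wait for it
# to finish starting before starting this service
Wants=postgresql.service
After=postgresql.service
```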

Targets: The Modern Runlevels

Targets group units together and represent system states. They replace the old SysVinit runlevels:

| Target | Old Runlevel | Description |
| --- | --- | --- |
| poweroff.target | 0 | System halt |
| rescue.target | 1 | Single-user mode |
| multi-user.target | 3 | Multi-user, no GUI |
| graphical.target | 5 | Multi-user with GUI |
| reboot.target | 6 | System reboot |

# Check the current default target
systemctl get-default

# Set default target to multi-user (no GUI)
systemctl set-default multi-user.target

# Switch to rescue mode immediately
systemctl isolate rescue.target

Timer Units: Replacing cron

systemd timers are cron's replacement, and they're better in almost every measurable way: they support calendar expressions, monotonic intervals, randomized delays to avoid thundering herds, and they log through journald so you can actually see what happened.

Do systemd timers replace cron?

Yes, for most use cases. systemd timers offer better logging integration through journald, dependency management, and the ability to catch up on missed runs. Cron still works and is simpler for one-off scheduling, but timers are the modern approach on systemd-based systems and give you systemctl list-timers for visibility into what's scheduled.

A timer needs two files: the timer unit and the service unit it triggers.

# /etc/systemd/system/backup.timer
[Unit]
Description=Daily database backup

[Timer]
OnCalendar=*-*-* 02:00:00
RandomizedDelaySec=900
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/backup.service
[Unit]
Description=Database backup job

[Service]
Type=oneshot
User=backup
ExecStart=/opt/scripts/backup.sh

# Enable and start the timer
systemctl enable --now backup.timer

# List all active timers
systemctl list-timers --all

# Manually trigger the service (for testing)
systemctl start backup.service

Persistent=true means if the system was off when the timer should have fired, it runs immediately on the next boot. RandomizedDelaySec=900 adds up to 15 minutes of jitter, which prevents every server in your fleet from hammering the backup target at exactly 2 AM.
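OnCalendar accepts flexible expressions beyond fixed daily times; a few common patterns, which you can verify with systemd-analyze calendar '<expression>':

```ini
# Every day at 02:00 (same as the unit above)
OnCalendar=*-*-* 02:00:00
# Shorthand aliases
OnCalendar=daily
OnCalendar=weekly
# Every 15 minutes, on the quarter hour
OnCalendar=*:0/15
# Mondays and Fridays at 09:30
OnCalendar=Mon,Fri *-*-* 09:30:00
```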

Socket Activation

Socket activation lets systemd listen on a port and only start the actual service when a connection arrives. This speeds up boot time and means services that are rarely used don't consume resources until needed.

# /etc/systemd/system/myapp.socket
[Unit]
Description=My App Socket

[Socket]
ListenStream=8080
Accept=no

[Install]
WantedBy=sockets.target

When a connection arrives on port 8080, systemd starts the corresponding myapp.service and passes it the socket file descriptor. The service handles the request without any dropped connections.
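The paired service must share the socket's name (myapp.service for myapp.socket) unless you set Service= in the socket unit, and the application itself must support the sd_listen_fds convention: the socket arrives as file descriptor 3, signaled by the LISTEN_FDS environment variable. A minimal sketch of the service side, with a hypothetical binary path:

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My App (socket activated)
Requires=myapp.socket

[Service]
# The app inherits the listening socket as fd 3 (LISTEN_FDS=1)
ExecStart=/opt/myapp/bin/server
```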

Diagnostic Tools

journalctl: Reading Logs

# Logs for a specific service
journalctl -u myapp.service

# Follow logs in real time
journalctl -u myapp.service -f

# Logs since last boot
journalctl -b

# Logs from a time range
journalctl --since "2024-01-15 10:00" --until "2024-01-15 11:00"

# Only errors and above
journalctl -u myapp.service -p err

# JSON output (for parsing)
journalctl -u myapp.service -o json-pretty

systemctl: Service Management

# Detailed status with recent logs
systemctl status myapp.service

# List all failed units
systemctl --failed

# Show all dependencies of a unit
systemctl list-dependencies myapp.service

# Check if a unit is enabled
systemctl is-enabled myapp.service

systemd-analyze: Boot Performance

# Total boot time
systemd-analyze

# Time each unit took to start (sorted by duration)
systemd-analyze blame

# Critical chain -- the longest dependency path
systemd-analyze critical-chain

# Generate an SVG boot chart
systemd-analyze plot > boot-chart.svg

Pro tip: systemd-analyze blame is the first command to run when boot times are slow. It shows which units took the longest to start. Pair it with systemd-analyze critical-chain to see which slow unit is actually on the critical path versus just slow but parallel.

Hosting and Server Costs

Managing systemd units is a core skill for any VPS or dedicated server. Here's what you'll pay for entry-level servers where you'll put this knowledge to use:

| Provider | Plan | Monthly Cost | Best For |
| --- | --- | --- | --- |
| Hetzner | CX22 | ~$4.35 | Best value in Europe |
| DigitalOcean | Basic Droplet | $4.00 | Developer-friendly UI |
| Vultr | Cloud Compute | $2.50 | Cheapest entry point |
| AWS EC2 | t4g.micro | ~$6.10 | AWS ecosystem integration |
| Linode | Nanode | $5.00 | Simple, predictable pricing |

Frequently Asked Questions

What is the difference between Wants and Requires in systemd?

Wants= is a soft dependency -- systemd will try to start the other unit, but your service starts regardless of whether it succeeds. Requires= is a hard dependency -- if the required unit fails to start, your service also fails. In practice, Wants= is safer for most cases because it prevents cascading failures when a dependency has a transient issue.

How do I make a systemd service start on boot?

Run systemctl enable myservice. This requires an [Install] section with WantedBy=multi-user.target in your unit file; enabling creates a symlink in the target's wants directory (usually multi-user.target.wants/) so systemd knows to start the service during boot. To start it immediately and enable it in one command, use systemctl enable --now myservice.

Why does my service show "activating" and then fail?

This usually means ExecStartPre or ExecStart is exiting with a non-zero code. Check journalctl -u myservice -n 50 for the actual error. Common causes: wrong binary path (systemd requires absolute paths), missing environment variables, or the User specified in the unit file doesn't have permission to run the command.

How do systemd timers differ from cron jobs?

Timers integrate with journald for logging, support Persistent=true to catch up on missed runs, allow randomized delays to prevent thundering herds, and can depend on other units. Cron is simpler for basic scheduling but offers no logging, no dependency management, and no visibility via systemctl list-timers.
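As a reference point when converting, here's a classic crontab line and its timer-unit equivalent (calendar syntax per systemd.time(7)):

```ini
# crontab: 30 2 * * 1   (02:30 every Monday)
# equivalent [Timer] section:
[Timer]
OnCalendar=Mon *-*-* 02:30:00
```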

What does systemd-analyze blame show?

systemd-analyze blame lists every unit started during boot, sorted by how long each took to initialize. It's the first tool to reach for when investigating slow boot times. Note that units starting in parallel may show high times individually but not affect total boot duration. Use critical-chain to see the actual critical path.

Can I use systemd without root access?

Yes. systemd supports user-level services via systemctl --user. Place unit files in ~/.config/systemd/user/ and manage them with systemctl --user start myservice. User services run as your user and can start on login. Enable lingering with loginctl enable-linger username to keep them running after you log out.
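A minimal sketch, assuming a hypothetical sync.service that runs rsync; the unit lives under ~/.config/systemd/user/ and is managed entirely without sudo:

```shell
# Create the user unit directory and a one-shot service (hypothetical example)
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/sync.service <<'EOF'
[Unit]
Description=Sync notes to backup

[Service]
Type=oneshot
ExecStart=/usr/bin/rsync -a %h/notes/ %h/backup/

[Install]
WantedBy=default.target
EOF

# Then, on a systemd system:
#   systemctl --user daemon-reload
#   systemctl --user enable --now sync.service
#   loginctl enable-linger "$USER"   # keep it running after logout
```

The %h specifier expands to the user's home directory, so the same unit file works for any account.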

Conclusion

systemd isn't going anywhere. Learn unit files, understand the dependency model (Wants + After for most things), use timers instead of cron, and lean on journalctl and systemd-analyze for debugging. The investment pays off every time you deploy a new service, diagnose a boot issue, or schedule a maintenance task. Start by converting one of your existing cron jobs to a timer unit -- it'll take ten minutes and you'll immediately see the benefits of integrated logging and failure handling.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
