Performance tuning is a critical part of Linux system administration. Whether you’re managing high-traffic servers, database clusters, or virtualization hosts, understanding how to optimize CPU, memory, disk, network, and kernel parameters ensures your system runs at peak efficiency. This guide covers core concepts, advanced techniques, monitoring tools, and practical configurations to help you tune Linux for production workloads.
1. CPU Performance Tuning
CPU performance reflects active processing time versus idle periods. The load average indicates queued processes over 1, 5, and 15 minutes.
- High load + low CPU usage → processes waiting on I/O (likely disk or network bottleneck)
- High CPU usage + load near or above the core count → compute-bound workload
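The rule of thumb above can be checked with a short script that compares the 1-minute load average from /proc/loadavg with the number of online cores (the threshold logic is illustrative, not universal):

```shell
#!/bin/sh
# Compare the 1-minute load average with the number of online cores.
load=$(cut -d' ' -f1 /proc/loadavg)   # 1-minute load average
cores=$(nproc)                        # online CPU cores
echo "load=${load} cores=${cores}"
# awk handles the float comparison; a sustained load above the core
# count means processes are queueing (for CPU, or stuck in I/O wait).
if awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l > c) }'; then
    echo "run queue exceeds core count - check %iowait with mpstat"
fi
```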
Key Techniques
CPU Affinity: Bind processes to specific cores to improve cache locality:
# taskset -cp 0,1 1234 # Bind PID 1234 to cores 0 and 1
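You can confirm the mask took effect by querying the PID again (here the current shell, `$$`, stands in for the target process):

```shell
taskset -cp $$   # shows the current affinity list of this shell
```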
Process Priority: Adjust CPU scheduling with nice and renice:
# nice -n 10 process_name # Start process with lower priority
# renice -n 5 -p 1234 # Change priority of running process
Frequency Scaling: Use cpupower or tuned to set performance governors:
# cpupower frequency-set -g performance
Monitoring Tools
| Tool | Best For | Key Features | Example |
|---|---|---|---|
| top | Basic real-time view | CPU/memory per process, sortable | top |
| htop | Interactive monitoring | Tree view, mouse support, colors | htop |
| mpstat | Multi-CPU stats | Per-core usage, interval reports | mpstat -P ALL 1 5 |
| glances | All-in-one overview | Cross-platform, remote capable | glances |
| btop | Modern htop alternative | Graphical metrics, disks, net | btop |
Example:
# mpstat -P ALL 1 5 # All CPUs, every second, 5 times
2. Memory Management
Efficient memory usage improves overall responsiveness. RAM handles active processes; swap acts as overflow when RAM fills.
Swappiness
Controls how aggressively the kernel swaps out anonymous memory (0-100; default 60).
Lower values favor keeping pages in RAM; higher values push pages to swap more readily.
# sysctl vm.swappiness=10
# echo 'vm.swappiness=10' >> /etc/sysctl.conf
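The active value can be read back without root:

```shell
cat /proc/sys/vm/swappiness   # current swappiness value
```

Watch the si/so columns of `vmstat 1` afterwards to confirm swap traffic actually drops.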
Monitoring Commands
# free -m # RAM and swap usage
# vmstat 1 # Memory, swap, paging stats every second
# htop / smem # Per-process memory usage
Page Cache
Linux caches disk reads in memory to improve I/O performance. Drop cache cautiously for testing:
# echo 3 > /proc/sys/vm/drop_caches
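To see what a drop actually frees, inspect the cache counters in /proc/meminfo before and after; the drop itself requires root, and `sync` should run first so dirty pages are written back:

```shell
# Page-cache and dirty-page counters (readable without root):
grep -E '^(Cached|Buffers|Dirty):' /proc/meminfo
# As root, flush dirty pages first, then drop clean caches:
#   sync && echo 3 > /proc/sys/vm/drop_caches
```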
3. Disk I/O Optimization
Disk bottlenecks often limit performance more than CPU or memory.
I/O Schedulers
| Scheduler | Use Case |
|---|---|
| noop / none | SSDs and NVMe (minimal overhead) |
| deadline / mq-deadline | Latency-sensitive workloads |
| cfq / bfq | Fair scheduling across processes |

Note: on modern multi-queue (blk-mq) kernels, the legacy noop, deadline, and cfq schedulers are replaced by none, mq-deadline, bfq, and kyro's kyber; check your kernel's sysfs listing to see which are available.
Check/change scheduler:
# cat /sys/block/sda/queue/scheduler
# echo deadline > /sys/block/sda/queue/scheduler
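The sysfs write above does not survive a reboot; a udev rule makes the choice persistent (the filename and device matches below are illustrative):

```
# /etc/udev/rules.d/60-ioscheduler.rules (illustrative path)
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="mq-deadline"
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
```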
Mount Options
Reduce unnecessary writes:
# mount -o remount,noatime,nodiratime /var
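To make the options permanent, set them in /etc/fstab (the UUID below is a placeholder); note that on current kernels noatime already implies nodiratime:

```
# /etc/fstab entry (UUID is a placeholder)
UUID=xxxx-xxxx  /var  ext4  defaults,noatime  0  2
```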
Monitoring Tools
| Tool | Focus | Example |
|---|---|---|
| iostat | Device-level I/O | iostat -x 1 |
| iotop | Per-process I/O | iotop -o |
| dstat | Combined metrics | dstat -cdngy |
| nmon | All metrics (interactive) | nmon |
4. Network Performance
Network tuning improves throughput, reduces latency, and avoids packet drops.
Advanced Sysctl Parameters
net.core.rmem_max=16777216          # Max socket receive buffer (bytes)
net.core.wmem_max=16777216          # Max socket send buffer (bytes)
net.ipv4.tcp_congestion_control=bbr # Use BBR congestion control
net.ipv4.tcp_window_scaling=1       # Enable TCP window scaling
net.ipv4.tcp_fin_timeout=30         # Shorter FIN-WAIT-2 timeout
Tools for Monitoring
- iftop / nload → Bandwidth usage per connection/interface
- ss -s → Socket summaries
- ethtool -k eth0 → Offload capabilities (TSO, GRO)
- ip link set eth0 mtu 9000 → Enable jumbo frames (if supported)
Persist network tuning in /etc/sysctl.d/99-network.conf, then reload all drop-in files:
# sysctl --system # Reload every sysctl configuration file
5. Kernel Tuning Essentials
Kernel parameters control system-wide limits, file handling, and networking:
fs.file-max=100000 # Max open files
kernel.pid_max=4194304 # Maximum PIDs
net.core.somaxconn=65535 # Max connection backlog
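Note that fs.file-max is the system-wide ceiling; individual processes are still bounded by their own limit, which is worth checking alongside it:

```shell
cat /proc/sys/fs/file-max   # system-wide maximum open files
ulimit -n                   # per-process soft limit for this shell
```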
Tuned Profiles
Tuned simplifies performance tuning for production workloads:
# tuned-adm profile throughput-performance # Apply a tuned profile
# sysctl -p # Reload /etc/sysctl.conf settings separately
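Before switching profiles, it helps to see what is available and what is currently active (guarded below, since tuned may not be installed on every host):

```shell
# List profiles and show the active one; tuned-adm may be absent.
if command -v tuned-adm >/dev/null 2>&1; then
    tuned-adm list
    profile=$(tuned-adm active)
else
    profile="tuned-adm not installed"
fi
echo "$profile"
```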
6. Additional Tools and Monitoring
- s-tui → CPU frequency, temperature under stress
- iperf3 → Bandwidth testing (iperf3 -s server)
- sysbench → Benchmark CPU, memory, I/O:
# sysbench cpu --threads=8 run
7. Best Practices
Automate Monitoring: Log metrics regularly:
*/5 * * * * /usr/bin/mpstat 1 1 >> /var/log/perf.log
Benchmark Before/After: Always test performance improvements.
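A minimal before/after harness sketch: run the same micro-benchmark around a tuning change and compare the results (the dd sequential-write test here is illustrative; substitute your real workload):

```shell
#!/bin/sh
# Run one sequential-write pass and report dd's throughput line.
run_bench() {
    dd if=/dev/zero of=/tmp/bench.tmp bs=1M count=64 conv=fdatasync 2>&1 \
        | tail -n 1   # dd prints throughput on its last output line
}
before=$(run_bench)
# ... apply the tuning change here ...
after=$(run_bench)
rm -f /tmp/bench.tmp
echo "before: $before"
echo "after:  $after"
```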
Monitor Bottlenecks:
- CPU usage < 80%, load average below the number of cores
- I/O wait < 5%
- Free memory > 20%
Secure and Version Configs: Keep sysctl and tuned configurations under version control.
Gradual Changes: Apply tuning parameters incrementally to avoid system instability.
8. Pro Tips for Production Servers
- Use cgroups to limit CPU/memory for services.
- Leverage NUMA-aware tuning for multi-socket servers.
- For databases, isolate I/O-intensive workloads on separate disks.
- Combine systemd timers with monitoring scripts for automated performance reports.
- Consider container-aware tuning if running Kubernetes or Docker workloads.
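For the cgroup limits mentioned above, a systemd drop-in is the usual mechanism; a sketch (service name, path, and values are illustrative):

```
# /etc/systemd/system/myapp.service.d/limits.conf (illustrative path)
[Service]
CPUQuota=50%
MemoryMax=512M
```

Apply with `systemctl daemon-reload && systemctl restart myapp`.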
Conclusion
Linux performance tuning is both science and art. By understanding CPU scheduling, memory management, I/O optimization, network parameters, and kernel tweaks, administrators can achieve predictable, stable, and high-performing systems. Regular monitoring, benchmarking, and incremental adjustments are key to maintaining peak performance under diverse workloads.