You know the feeling. Production is on fire, your Slack is lighting up, and you need answers from a 2GB log file right now. You don't have time to open a text editor. You don't have time to write a script. You need a one-liner that works on the first try.
The best DevOps engineers have a mental library of shell commands they can fire off without thinking. These aren't theoretical exercises -- they're the commands that save you at 2 AM during an incident.
Here are five bash one-liners that every DevOps engineer should have committed to muscle memory. Each one solves a real problem you'll face this week.
1. Parse and Aggregate Logs with awk
The problem: You need to find which API endpoints are getting hammered, and your only data source is an nginx access log.
awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20
What it does:
- awk '{print $7}' -- Extracts the 7th field (the request URI) from each log line
- sort -- Alphabetically sorts all URIs so duplicates are adjacent
- uniq -c -- Collapses duplicates and prefixes each line with a count
- sort -rn -- Sorts numerically in reverse (highest count first)
- head -20 -- Shows only the top 20
Real-world output:
14523 /api/v1/users
8291 /api/v1/health
6104 /api/v1/orders
3887 /static/bundle.js
2901 /api/v1/auth/token
Now you know /api/v1/users is getting 14K hits. Time to dig into why.
Level it up: Filter by HTTP status code first to focus on errors:
awk '$9 >= 500 {print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20
This version only counts requests that returned 5xx errors. Field $9 is the status code in the standard nginx log format.
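To see the field positions in action without a production log, here's a self-contained sketch against a throwaway four-line file. The /tmp path and the log entries are fabricated, but the format matches nginx's default combined log, where field 7 is the URI and field 9 is the status:

```shell
# Throwaway sample in nginx's default combined format (field 7 = URI, field 9 = status).
cat > /tmp/sample_access.log <<'EOF'
10.0.0.1 - - [28/Mar/2026:14:22:01 +0000] "GET /api/v1/users HTTP/1.1" 502 512 "-" "curl/8.0"
10.0.0.2 - - [28/Mar/2026:14:22:02 +0000] "GET /api/v1/users HTTP/1.1" 504 512 "-" "curl/8.0"
10.0.0.3 - - [28/Mar/2026:14:22:03 +0000] "GET /api/v1/orders HTTP/1.1" 503 173 "-" "curl/8.0"
10.0.0.4 - - [28/Mar/2026:14:22:04 +0000] "GET /api/v1/health HTTP/1.1" 200 2 "-" "kube-probe/1.29"
EOF

# Same 5xx-only pipeline, tiny input:
awk '$9 >= 500 {print $7}' /tmp/sample_access.log | sort | uniq -c | sort -rn
# prints:
#   2 /api/v1/users
#   1 /api/v1/orders
```

Note that the health-check noise disappears automatically: its 200 fails the $9 >= 500 test.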
If you use the shell history techniques from our previous post, you can recall and tweak these awk patterns fast. But there's an even faster way -- more on that later.
2. Process JSON at the Command Line with jq
The problem: Your monitoring API returns JSON, and you need to extract specific fields across hundreds of entries without writing a script.
curl -s https://api.example.com/v1/deployments | jq -r '.deployments[] | select(.status == "failed") | "\(.created_at) \(.service) \(.error)"'
What it does:
- curl -s -- Fetches the JSON silently (no progress bar)
- jq -r -- Parses JSON and outputs raw strings (no quotes)
- .deployments[] -- Iterates over each item in the deployments array
- select(.status == "failed") -- Filters to only failed deployments
- "\(.created_at) \(.service) \(.error)" -- Formats a clean output string
Real-world output:
2026-03-28T14:22:01Z payments-api OOMKilled
2026-03-28T15:03:44Z auth-service ImagePullBackOff
2026-03-29T01:17:33Z worker-queue ConnectionRefused
Three failed deploys, instantly visible, no dashboard required.
Level it up: Pipe the output to further shell tools for analysis:
curl -s https://api.example.com/v1/deployments \
| jq -r '.deployments[] | select(.status == "failed") | .service' \
| sort | uniq -c | sort -rn
This tells you which services fail most often. Classic jq + sort + uniq pipeline -- the DevOps Swiss Army knife.
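If you'd rather keep the aggregation inside jq itself, group_by and length can replace the sort | uniq -c step. Here's a sketch against a canned local file standing in for the (hypothetical) API response above, so it runs without any network access:

```shell
# Canned stand-in for the deployments API response.
cat > /tmp/deployments.json <<'EOF'
{"deployments":[
  {"service":"payments-api","status":"failed"},
  {"service":"payments-api","status":"failed"},
  {"service":"auth-service","status":"failed"},
  {"service":"worker-queue","status":"succeeded"}
]}
EOF

# Group failed deployments by service and count each group, all inside jq.
jq -r '[.deployments[] | select(.status == "failed")]
       | group_by(.service)
       | map({service: .[0].service, failures: length})
       | sort_by(-.failures)[]
       | "\(.failures) \(.service)"' /tmp/deployments.json
# prints:
# 2 payments-api
# 1 auth-service
```

Whether you aggregate in jq or in sort | uniq -c is mostly taste; the shell version is easier to remember, the jq version keeps everything in one tool.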
Tired of memorizing jq syntax? RewriteCmd generates complex jq filters from plain English. Just describe what you want -- "show me failed deployments with timestamp and error" -- and get the exact command. Try it free.
3. Bulk File Operations with find and xargs
The problem: You need to find all .env files across a project tree and check if any contain hardcoded secrets.
find . -name "*.env" -type f | xargs grep -l "API_KEY\|SECRET\|PASSWORD" 2>/dev/null
What it does:
- find . -name "*.env" -type f -- Recursively finds all files matching *.env
- xargs grep -l -- Passes those filenames to grep, which prints only filenames containing matches
- "API_KEY\|SECRET\|PASSWORD" -- Matches any of these patterns (basic regex alternation)
- 2>/dev/null -- Suppresses permission errors from files grep can't read
Real-world output:
./services/payments/.env
./services/auth/.env.production
./deploy/staging/.env
Now you know exactly which files to audit before that next PR review.
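One caveat worth knowing: a plain find | xargs pipeline splits on whitespace, so a path like ./my service/prod.env becomes two bogus arguments. The -print0 / xargs -0 pair fixes that. A self-contained sketch with throwaway files under /tmp:

```shell
# Throwaway tree with a space in one path, to show why NUL delimiting matters.
mkdir -p "/tmp/envdemo/my service"
printf 'API_KEY=abc123\n' > "/tmp/envdemo/my service/prod.env"
printf 'DEBUG=true\n' > /tmp/envdemo/plain.env

# -print0 emits NUL-terminated paths; xargs -0 reads them back intact,
# so the space in "my service" can't split one filename into two.
find /tmp/envdemo -name "*.env" -type f -print0 \
  | xargs -0 grep -l "API_KEY\|SECRET\|PASSWORD" 2>/dev/null
# prints: /tmp/envdemo/my service/prod.env
```

Make -print0 / -0 a reflex and an awkward filename will never silently skip your audit.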
Level it up: Find and delete stale log files older than 30 days in one shot:
find /var/log/app -name "*.log" -mtime +30 -type f -exec rm -v {} +
Or find large files eating your disk:
find / -type f -size +100M -exec ls -lh {} + 2>/dev/null | sort -k5 -h -r | head -20
The find + xargs / find + -exec pattern is one of the most versatile tools in a DevOps engineer's belt. Once you learn the syntax, you'll use it daily.
4. Network Debugging One-Liners
The problem: A service can't reach its dependency and you need to figure out if it's a DNS issue, a firewall rule, or the service itself.
for host in db.internal cache.internal api.internal; do printf "%-20s " "$host"; out=$(curl -so /dev/null --max-time 5 -w "%{http_code} %{time_total}s" "http://$host/health" 2>/dev/null) && echo "$out" || echo "UNREACHABLE"; done
What it does:
- Loops through a list of internal hostnames
- curl -so /dev/null -- Silently makes a request and discards the body
- -w "%{http_code} %{time_total}s" -- Prints the HTTP status code and total request time
- || echo "UNREACHABLE" -- If curl fails entirely (DNS failure, connection refused), prints a fallback
Real-world output:
db.internal 200 0.023s
cache.internal UNREACHABLE
api.internal 503 2.104s
Instantly clear: cache is unreachable (likely DNS or firewall), and the API is slow and returning 503.
Level it up: Check which ports are open on a host without installing nmap:
for port in 22 80 443 5432 6379 8080; do (echo >/dev/tcp/target-host/$port) 2>/dev/null && echo "$port open" || echo "$port closed"; done
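The /dev/tcp trick is a bash feature, not a real device file: when you redirect to that pseudo-path, the shell itself opens a TCP socket. A minimal self-contained version probes a single local port -- port 1 (tcpmux) is essentially never in use, so the connect attempt should fail and report closed:

```shell
# Probe one local port via bash's /dev/tcp pseudo-path.
# Port 1 is almost never listening, so expect "1 closed".
for port in 1; do
  (echo >/dev/tcp/127.0.0.1/$port) 2>/dev/null \
    && echo "$port open" || echo "$port closed"
done
```

Because it's a bash builtin behavior, this works in containers stripped of nc and nmap -- but not under plain sh or dash.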
Or watch connections in real-time with ss:
ss -tnp | awk 'NR>1 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -10
This shows which remote IPs have the most active TCP connections to your machine -- useful for spotting connection leaks or unexpected traffic. (The awk 'NR>1' skips ss's header row; note that cut -d: splits IPv4 addresses cleanly but will mangle bracketed IPv6 ones.)
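The counting idiom at the end of that pipeline works on any column of text, not just ss output. Here it is isolated on a fabricated peer-address column -- handy for verifying the tail of the pipeline before pointing it at live data:

```shell
# Fabricated peer-address column, standing in for ss's output.
printf '%s\n' 10.0.0.5:443 10.0.0.5:443 10.0.0.9:5432 10.0.0.5:80 \
  | cut -d: -f1 | sort | uniq -c | sort -rn | head -10
# prints:
#   3 10.0.0.5
#   1 10.0.0.9
```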
5. Kubernetes Quick Commands
The problem: Pods are crash-looping and you need to find which ones, why, and how many restarts they've accumulated -- across all namespaces.
kubectl get pods --all-namespaces --field-selector=status.phase!=Running -o custom-columns="NAMESPACE:.metadata.namespace,NAME:.metadata.name,STATUS:.status.phase,RESTARTS:.status.containerStatuses[0].restartCount" | sort -k4 -rn
What it does:
- --all-namespaces -- Searches every namespace, not just default
- --field-selector=status.phase!=Running -- Filters out healthy pods
- -o custom-columns=... -- Formats output with exactly the fields you need
- sort -k4 -rn -- Sorts by restart count, highest first
Real-world output:
NAMESPACE    NAME                        STATUS    RESTARTS
production   worker-queue-7f8d4b-x2k9q   Failed    847
staging      auth-svc-5c4a1b-m8j3p       Failed    12
monitoring   prometheus-node-exp-v4k2l   Pending   0
847 restarts on the worker queue -- that's your priority. One caveat: a pod stuck in CrashLoopBackOff actually reports phase Running, so the field selector above won't catch it. Drop the selector and sort by the RESTARTS column (or grep the default output for CrashLoopBackOff) when crash loops are what you're hunting.
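When custom-columns can't express what you need, piping kubectl get pods -o json into jq is the escape hatch. A sketch against a canned file that mimics a trimmed version of the pod-list JSON shape, so it runs without a cluster:

```shell
# Canned file mimicking the shape of `kubectl get pods -o json` (heavily trimmed).
cat > /tmp/pods.json <<'EOF'
{"items":[
  {"metadata":{"namespace":"production","name":"worker-queue-7f8d4b-x2k9q"},
   "status":{"phase":"Running","containerStatuses":[{"restartCount":847}]}},
  {"metadata":{"namespace":"staging","name":"auth-svc-5c4a1b-m8j3p"},
   "status":{"phase":"Failed","containerStatuses":[{"restartCount":12}]}}
]}
EOF

# Restart count per pod, highest first -- the jq mirror of the custom-columns trick.
jq -r '.items[]
       | "\(.status.containerStatuses[0].restartCount) \(.metadata.namespace)/\(.metadata.name)"' \
  /tmp/pods.json | sort -rn
# prints:
# 847 production/worker-queue-7f8d4b-x2k9q
# 12 staging/auth-svc-5c4a1b-m8j3p
```

The same filter works on live data by swapping the file for kubectl get pods -A -o json.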
Level it up: Get the last 50 lines of logs from all failed pods (the loop carries the namespace along, since kubectl logs needs it for anything outside default):
kubectl get pods -A --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.namespace,:metadata.name | while read -r ns pod; do echo "=== $ns/$pod ==="; kubectl logs -n "$ns" "$pod" --tail=50 2>/dev/null; done
Or quickly exec into the most recently created pod of a deployment:
kubectl exec -it $(kubectl get pods -l app=myapp --sort-by=.metadata.creationTimestamp -o jsonpath='{.items[-1].metadata.name}') -- /bin/sh
The Pattern Behind the Power
Look at these five one-liners again. They all share the same structure: pipe small, composable tools together to solve big problems. That's the Unix philosophy at work -- and it's what makes the command line the fastest debugging tool you have.
But there's a catch. These commands are powerful when you can remember them. The jq filter syntax, the awk field numbers, the kubectl custom-columns format -- they're hard to recall under pressure.
That's exactly the problem RewriteCmd solves. Instead of memorizing syntax, you describe what you want in plain English:
- "Show me the top 20 URLs in the nginx log"
- "Find all env files containing secrets"
- "List crash-looping pods sorted by restart count"
RewriteCmd generates the exact command, ready to run. It's like having a senior DevOps engineer sitting next to you, fluent in every CLI tool you've ever needed.
If these terminal shortcuts and history tricks are the foundation, RewriteCmd is the shortcut that makes you dangerous.
curl -fsSL https://rewritecmd.com/install | sh
What's Your Go-To One-Liner?
Every DevOps engineer has that one command they've used a hundred times. What's yours? Share it in the comments or tweet at us -- we're always looking for real-world commands to feature in future posts.