Understanding Linux Standard Streams - stdin, stdout, and stderr
A comprehensive guide to Linux standard streams and redirection

Overview
Linux standard streams (stdin, stdout, and stderr) are fundamental channels for managing input and output in terminal or command-line interfaces.
Standard streams are a key concept in Unix/Linux operating systems, providing a unified way for programs to receive input and send output without needing to know the source or destination. This abstraction is one of the powerful features that enables the Unix philosophy of creating simple, modular tools that can be combined to perform complex operations.
The concept of standard streams originated with Unix in the early 1970s. Ken Thompson and Dennis Ritchie designed this I/O model to provide a consistent interface for program interaction. This design decision has influenced virtually all operating systems and programming languages since then.
The file descriptor numbers (0, 1, and 2) were assigned to these streams as they were the first files to be opened by a process under Unix, establishing a convention that has persisted for over 50 years.
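On Linux you can see these three descriptors for any running process under /proc. A quick sketch using the current shell's PID ($$):

```shell
# List the standard descriptors of the current shell.
# /proc/<pid>/fd contains one symlink per open descriptor;
# 0 (stdin), 1 (stdout) and 2 (stderr) exist for any normal process.
ls -l /proc/$$/fd/0 /proc/$$/fd/1 /proc/$$/fd/2
```

Each entry is a symlink pointing at the device, file, or pipe the descriptor is currently connected to.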
The Three Standard Streams
Standard Input (stdin)
Standard Input is the primary channel through which a program receives data.
File Descriptor: 0
Default Source: Keyboard
Purpose: Receives input for commands
In the terminal, stdin typically connects to the keyboard, allowing users to type input directly to programs. However, this stream can be redirected to come from files or the output of other programs.
Basic Examples
# Basic stdin usage - waits for keyboard input
cat # Press Ctrl+D to signal end of input (EOF)
# Redirect file to stdin
cat < file.txt
# Here document (multi-line input)
cat << EOF
This is line one
This is line two
This is the final line
EOF
Advanced stdin Usage
# Using here-string
grep "pattern" <<< "Text to search in"
# Multiple input files
cat < file1.txt < file2.txt # Only reads file2.txt (last redirection wins)
# Process substitution
diff <(ls dir1) <(ls dir2) # Compare directory listings
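Process substitution works because the shell expands <( ... ) into a pseudo-file path (such as /dev/fd/63) that the outer command opens like an ordinary file. A minimal sketch of that mechanism:

```shell
#!/bin/bash
# <( ... ) expands to a path; echo shows the path itself,
# while cat opens it and reads the inner command's output.
echo <(echo hi)   # shows a path such as /dev/fd/63
cat <(echo hi)    # prints: hi
```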
Standard Output (stdout)
Standard Output is the default channel for normal program output.
File Descriptor: 1
Default Destination: Terminal screen
Purpose: Displays command output
When a program runs successfully and generates output, that output is sent to stdout by default, which typically displays on the terminal.
Basic Examples
# Basic stdout usage
echo "Hello, World!"
# Redirect stdout to file (overwrites existing content)
echo "Hello, World!" > output.txt
# Append stdout to file
echo "Another line" >> output.txt
Advanced stdout Usage
# Redirect to /dev/null (discard output)
ls -la > /dev/null
# Redirect to multiple files using tee
echo "Same content" | tee file1.txt file2.txt
# Using process substitution
tee >(sort > sorted.txt) >(wc -l > count.txt) < data.txt
Standard Error (stderr)
Standard Error is a separate output channel specifically for error messages and diagnostics.
File Descriptor: 2
Default Destination: Terminal screen
Purpose: Displays error messages and warnings
This separation of normal output and error messages is a powerful feature that allows for more flexible handling of program behavior, especially in scripts and automated processes.
Basic Examples
# Generate stderr output
ls /nonexistent-directory
# Redirect stderr to file
ls /nonexistent-directory 2> error.log
# Append stderr to file
ls /nonexistent-directory 2>> error.log
Advanced stderr Usage
# Redirect stderr to stdout
ls /nonexistent-directory 2>&1
# Redirect only stderr to /dev/null (suppress errors)
ls /nonexistent-directory /etc 2>/dev/null
# Redirect stdout and stderr to different files
ls /etc /nonexistent-directory > output.log 2> error.log
Advanced Stream Operations
Combining and Redirecting Streams
Combining and redirecting streams allows for powerful control over program input and output.
Redirecting Both stdout and stderr
# Both to same file (bash 4+)
ls /nonexistent-directory /etc &> all_output.log
# Traditional syntax for both to same file
ls /nonexistent-directory /etc > all_output.log 2>&1
# Both to same file (append mode)
ls /nonexistent-directory /etc >> all_output.log 2>&1
# Separate files
./myscript.sh > output.log 2> error.log
Swapping Streams
# Swap stdout and stderr (grep then filters the original stderr)
./myscript.sh 3>&2 2>&1 1>&3 3>&- | grep "error"
# Explanation:
# 3>&2 : Create a new FD 3 pointing at whatever stderr (2) points at
# 2>&1 : Redirect stderr to where stdout points (here, the pipe)
# 1>&3 : Redirect stdout to FD 3 (the original stderr)
# 3>&- : Close the temporary FD 3
# Result: stdout and stderr are swapped
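As a self-contained check, here is the same swap applied to a tiny function (swap_demo is a hypothetical name for illustration) that writes one line to each stream; after the swap, grep on the pipeline sees the text that was originally sent to stderr:

```shell
#!/bin/bash
# swap_demo writes "out" to stdout and "err" to stderr.
swap_demo() { echo "out"; echo "err" >&2; }

# After the swap, the pipe carries the original stderr,
# so grep matches "err"; "out" now goes to the terminal's stderr.
swap_demo 3>&2 2>&1 1>&3 3>&- | grep "err"
```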
Using tee for Multiple Destinations
The tee command is like a T-junction in plumbing – it sends output to both a file and stdout.
# Send output to both screen and file
ls /etc | tee output.log
# Send output to screen and multiple files
ls /etc | tee file1.log file2.log
# Append to files instead of overwriting
ls /etc | tee -a logfile.txt
# Combine with stderr redirection
./script.sh 2>&1 | tee full_log.txt
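One tee idiom worth knowing: redirections are performed by the calling (unprivileged) shell before the command runs, so "sudo echo line > /etc/file" fails, while piping into a privileged tee works. Sketched here with an ordinary file so it runs without root (demo.log is a hypothetical filename):

```shell
# Redirection happens in the calling shell, so this fails on a
# root-owned file:    sudo echo "entry" > /etc/example.conf
# whereas tee itself runs with the elevated privileges:
#                     echo "entry" | sudo tee -a /etc/example.conf
# Unprivileged illustration of the same shape:
echo "entry" | tee -a demo.log > /dev/null
cat demo.log    # prints: entry
rm demo.log
```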
Shell Redirection Operators
Operation | Description
---|---
> file | Redirects stdout to a file (overwrites if exists)
2> file | Redirects stderr to a file
>> file | Appends stdout to a file
2>> file | Appends stderr to a file
< file | Uses a file as stdin input
2>&1 | Redirects stderr to stdout
1>&2 | Redirects stdout to stderr
&> file | Redirects both stdout and stderr to a file (Bash 4+)
&>> file | Appends both stdout and stderr to a file (Bash 4+)
\| | Pipes stdout from one command to stdin of another
\|& | Pipes both stdout and stderr to stdin of another command (Bash 4+)
<<< "string" | Here-string: sends a string to stdin
<< EOF ... EOF | Here-document: multi-line string to stdin
n>&m | Redirects file descriptor n to file descriptor m
n<&m | Duplicates file descriptor m onto input file descriptor n
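Several of these operators can appear on one command line. A small sketch combining a here-string, append redirection, and error suppression (upper.txt is a hypothetical filename):

```shell
#!/bin/bash
# stdin from a here-string, stdout appended to a file, stderr discarded
tr 'a-z' 'A-Z' <<< "hello" >> upper.txt 2>/dev/null
cat upper.txt    # prints: HELLO
rm upper.txt
```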
The Pipe Mechanism
The pipe operator (|) is one of the most powerful features in Unix/Linux systems, allowing commands to be chained together by connecting the stdout of one command to the stdin of another.
The pipe mechanism is a cornerstone of the Unix philosophy: "Write programs that do one thing and do it well. Write programs to work together." It enables complex operations through the composition of simple tools.
When Doug McIlroy, the inventor of Unix pipes, described his vision, he said: "We should have some ways of connecting programs like a garden hose — screw in another segment when it becomes necessary to massage data in another way."
Basic Pipe Examples
# Count files in a directory
ls | wc -l
# Find the largest files
du -h /etc | sort -hr | head -10
# Count occurrences of a pattern
cat /var/log/syslog | grep "error" | wc -l
Advanced Pipe Examples
# Multiple transformations
cat data.txt | grep "important" | sort | uniq -c | sort -nr
# Named pipes (FIFOs)
mkfifo mypipe
ls -la > mypipe &
cat mypipe
# Process substitution with pipes
diff <(ls -la /dir1) <(ls -la /dir2 | grep -v "temp")
Redirection Order Matters
The order of redirection operations can affect their behavior:
# Works: stderr goes to error.log, stdout goes to the pipe
./script.sh 2> error.log | grep "important"
# Doesn't work as expected: the pipe is set up first, then > error.log
# moves stdout into the file and 2>&1 follows it, so grep receives nothing
./script.sh > error.log 2>&1 | grep "important"
# To pipe both stdout and stderr
./script.sh 2>&1 | grep "important"
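The underlying rule is that redirections are processed left to right, which is why > file 2>&1 and 2>&1 > file behave differently:

```shell
# Processed left to right:
ls /nonexistent > out.log 2>&1   # 1) stdout -> out.log  2) stderr follows it
ls /nonexistent 2>&1 > out.log   # 1) stderr -> terminal (where stdout
                                 #    pointed at that moment)
                                 # 2) stdout -> out.log
cat out.log   # empty: ls produced no stdout, and its error hit the terminal
rm out.log
```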
Practical Use Cases
Log Management
# Save both output and errors to separate log files
./backup_script.sh > backup.log 2> backup_error.log
# Create timestamped log entries
echo "$(date) - Backup started" >> backup_history.log
# Capture command output in variables
output=$(ls -la 2>/dev/null)
Error Handling in Scripts
#!/bin/bash
# Define the handler first, then redirect stderr into it
error_handler() {
echo "Error occurred at $(date)" >> error.log
while read line; do
echo " $line" >> error.log
done
}
# Redirect stderr to the function via process substitution
exec 2> >(error_handler)
# Rest of script follows
ls /nonexistent
echo "Script continues..."
Data Processing Pipelines
# Extract, filter, and format data
cat server.log |
grep "ERROR" |
cut -d' ' -f3- |
sed 's/^\[.*\] //' |
sort |
uniq -c |
sort -nr > error_summary.txt
Custom File Descriptors
#!/bin/bash
# Create custom file descriptors
exec 3> output.log
exec 4> debug.log
# Use them in the script
echo "Regular output" >&3
echo "Debug information" >&4
# Close them when done
exec 3>&-
exec 4>&-
File Descriptors in Detail
File descriptors are numeric handles that represent open files or I/O channels within a process.
Standard File Descriptors
- 0 (stdin): Standard input - read operations
- 1 (stdout): Standard output - write operations for normal output
- 2 (stderr): Standard error - write operations for error messages
Creating and Using Custom File Descriptors
# Open a file descriptor for writing
exec 3> custom.log
# Write to the descriptor
echo "Custom log entry" >&3
# Redirect stdout to the custom descriptor
echo "This goes to custom.log" 1>&3
# Close the descriptor
exec 3>&-
# Open descriptors for both reading and writing
exec 4<> data.txt
# Read from descriptor
read line <&4
echo "Read: $line"
# Write to descriptor
echo "New data" >&4
# Close descriptor
exec 4>&-
Each process has a limit on the number of file descriptors it can open. The traditional limit was 1024, but modern systems often allow much higher limits.
You can check the current limits with:
ulimit -n
For system-wide limits, check:
cat /proc/sys/fs/file-max
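You can also count how many descriptors the current shell has open by listing its entries under /proc:

```shell
# Each entry in /proc/$$/fd is one open descriptor of this shell;
# at minimum 0, 1 and 2 are present.
ls /proc/$$/fd | wc -l
```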
Common Patterns and Best Practices
Silent Operations
# Discard all output
command > /dev/null 2>&1
# Discard only errors
command 2> /dev/null
Logging with Timestamps
# Save output with timestamps
command 2>&1 | while read line; do
echo "$(date '+%Y-%m-%d %H:%M:%S') $line" >> logfile.txt
done
Debugging Scripts
# Enable debug mode with redirected output
bash -x script.sh 2> debug.log
# Inside a script, enable debugging for a section
set -x # Start debugging
critical_operations
set +x # Stop debugging
Script Progress Updates
# Display progress on stderr while stdout goes to a file
echo "Starting backup..." >&2
backup_command > backup.log
echo "Backup completed!" >&2
Stream-Related Commands
Key Commands for Stream Manipulation
Command | Description | Example
---|---|---
tee | Read from stdin and write to stdout and files | ls \| tee output.txt
xargs | Build and execute commands from stdin | find . -name "*.txt" \| xargs grep "pattern"
mkfifo | Create a named pipe (FIFO special file) | mkfifo mypipe; cat mypipe
exec | Replace the current process or manipulate file descriptors | exec > logfile.txt
stdbuf | Run a command with modified buffering | stdbuf -oL command \| grep "pattern"
pv | Monitor the progress of data through a pipe | cat largefile \| pv \| grep "pattern" > output
script | Record a terminal session | script -q -c "ls -la" output.txt
Stream Buffering
Understanding stream buffering is crucial when working with complex pipelines or real-time data processing.
Buffering Types
- Unbuffered: Data is processed immediately (character by character)
- Line buffered: Data is processed when a newline is encountered
- Fully buffered: Data is processed when the buffer is full
Controlling Buffering
# Line-buffered output from grep
grep --line-buffered "pattern" file.log | next_command
# Using stdbuf to control buffering
stdbuf -oL command | grep "pattern" # Line buffered output
stdbuf -o0 command | grep "pattern" # Unbuffered output
# Using python with different buffering
python -u script.py | grep "pattern" # Unbuffered
Key Points
- Standard Streams
  - stdin (0): Input stream
  - stdout (1): Output stream
  - stderr (2): Error stream
- Default Behavior
  - stdin: Reads from keyboard
  - stdout: Writes to screen
  - stderr: Writes to screen
- Benefits
  - Flexible input/output control
  - Error handling separation
  - Script automation support
  - Enables command composition
Troubleshooting Common Issues
Order of Redirections
# Incorrect: nothing reaches the pipe, because stdout was already
# sent to output.log before the pipe could receive it
command > output.log 2> error.log | another_command
# Correct: pipe stdout while logging stderr separately
command 2> error.log | another_command
# Correct: keep a copy of stdout in a file and still pipe it
command 2> error.log | tee output.log | another_command
Redirection in Functions
# Function with local redirection
function process() {
local output
# Use subshell to contain redirection
output=$(command 2>/dev/null)
echo "$output"
}
Common Errors
- Permission denied: Attempting to write to a file without proper permissions
- No space left on device: Target filesystem is full
- Too many open files: Reached file descriptor limit (use
ulimit -n
to check) - Bad file descriptor: Attempting to use a closed or invalid file descriptor
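The last error is easy to reproduce: writing through a descriptor after closing it fails with "Bad file descriptor" (a sketch; scratch.log is a hypothetical filename, and 2>/dev/null comes first so the shell's own error message is suppressed):

```shell
#!/bin/bash
exec 3> scratch.log        # open FD 3 for writing
echo "ok" >&3              # succeeds while FD 3 is open
exec 3>&-                  # close FD 3
# The failed redirection makes the command fail, so the || branch runs
echo "fails" 2>/dev/null >&3 || echo "write failed: FD 3 is closed"
rm scratch.log
```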