Tuesday, September 5, 2017

Linux: common commands

To find and count duplicate lines across multiple files, you can use either of the following commands:
sort <files> | uniq -c | sort -nr
cat <files> | sort | uniq -c | sort -nr
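As a minimal sketch of the pipeline (the file names here are made up), counting a duplicate line across two small files:

```shell
# Create two sample files that share one line.
printf 'apple\nbanana\n' > a.txt
printf 'apple\ncherry\n' > b.txt

# Sort the combined lines so duplicates are adjacent, count them
# with uniq -c, then sort by count, highest first.
sort a.txt b.txt | uniq -c | sort -nr
```

The top line of the output shows the most-duplicated entry, here `2 apple`.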


My new favorite parameter combo is tail -qF *.log: -q to hide the file names, and -F, as Arcege pointed out, to make tail follow the name rather than the descriptor, because my log files are being rotated.
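Since -F follows forever, a quick way to see just the -q effect is a plain tail over two files (the log names here are hypothetical): without -q, tail prints a "==> name <==" header before each file; with -q, only the lines appear.

```shell
# Two sample log files.
printf 'one\ntwo\n'    > app.log
printf 'three\nfour\n' > db.log

# Default: each file's output is preceded by a "==> name <==" header.
tail -n 1 app.log db.log

# With -q the headers are suppressed; the streams are merged cleanly.
tail -q -n 1 app.log db.log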


cmd >>file.txt 2>&1
Bash executes the redirects from left to right as follows:

  1. >>file.txt: Open file.txt in append mode and redirect stdout there.
  2. 2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
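A small sketch of why the left-to-right order matters (file names and the helper function are made up): reversing the two redirects points stderr at the terminal, not the file.

```shell
# A helper that writes one line to stdout and one to stderr.
run() { echo out; echo err >&2; }

# Correct order: stdout -> file first, then stderr -> (the file).
# Both lines land in both.txt.
run >>both.txt 2>&1

# Reversed order: stderr is duplicated onto the terminal's stdout
# *before* stdout is redirected, so only "out" reaches the file.
run 2>&1 >>stdout_only.txt
```

Checking the files afterwards: both.txt contains "out" and "err", while stdout_only.txt contains only "out".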

Use wget to Recursively Download all Files of a Type, like jpg, mp3, pdf or others

If you need to download all files of a specific type from a site, you can use wget to do it.
Let's say you want to download all image files with the jpg extension.
wget -r -A .jpg http://site.with.images/url/
Now, if you need to download all mp3 music files, just change the above command to this:
wget -r -A .mp3 http://site.with.music/url/
The same applies to any other file type: movies, music, images, and so on.
Be respectful of the owner's rights and of the site's bandwidth.


  1. https://unix.stackexchange.com/questions/37329/efficiently-delete-large-directory-containing-thousands-of-files

    Using rsync is surprisingly fast and simple.

    mkdir empty_dir
    rsync -a --delete empty_dir/ yourdirectory/

  2. I use the following command to remove all of those annoying Apple files; it also recurses through all sub-directories:

    # find . -iname '._*' -exec rm -rf {} \;
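A small test of the idea on a throwaway tree (paths here are made up). Since the matches are plain files, rm -f is enough, and the `-exec … {} +` form batches many files into one rm invocation:

```shell
# Create a sample tree containing AppleDouble-style '._*' files.
mkdir -p demo/sub
touch demo/._thumbs demo/sub/._cover demo/keep.txt

# Remove every '._*' file, recursing through sub-directories.
find demo -iname '._*' -exec rm -f {} +

find demo -type f    # only demo/keep.txt remains
```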