Why write an entire bash script in functions?

Readability is one thing, but there is more to modularisation than that. ("Semi-modularisation" is perhaps the more accurate term for functions.)

Inside functions you can declare variables local, which increases reliability: it reduces the chance of unrelated parts of the script trampling each other's state.
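A minimal sketch of that point (the function and variable names here are mine, invented for illustration):

```shell
#!/usr/bin/env bash

shout() {
    # "local" confines msg to this function; it cannot clobber a
    # caller's variable of the same name. (${1^^} is bash 4+ uppercasing.)
    local msg="${1^^}"
    echo "$msg!"
}

msg="untouched"
shout "hello"        # prints HELLO!
echo "$msg"          # still prints "untouched" -- the function's msg was local
```

Without `local`, the assignment inside `shout` would have silently overwritten the caller's `msg`.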

Another advantage of functions is reusability. Once a function is written, it can be called multiple times within the script, and it can also be ported to other scripts.

Your code may be linear now, but in the future you may enter the realm of multiprocessing in the Bash world. Once you are used to structuring work in functions, you will be well equipped for the step into the parallel.
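For instance, once the work lives in a function, parallelising it is a one-character change. A sketch (process_item and its body are placeholders I made up, assuming the tasks are independent):

```shell
#!/usr/bin/env bash

# Hypothetical worker -- in a real script this might fetch a URL,
# compress a log file, etc.
process_item() {
    sleep 1
    echo "done: $1"
}

for item in a b c; do
    process_item "$item" &   # the trailing & runs each call as a background job
done
wait                         # block until all background jobs have finished
echo "all jobs finished"
```

The three `sleep 1` calls overlap, so the loop takes about one second rather than three.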

One more point to add. As Etsitpab Nioliv notes in the comment below, it is easy to redirect from a function as one coherent entity. And there is a further aspect of redirection with functions: the redirection can be attached to the function definition itself. E.g.:

f () { echo something; } > log

Now no explicit redirection is needed at the call sites:

$ f

This can spare many repetitions, which again increases reliability and helps keep things in order.

See also

  • https://unix.stackexchange.com/a/483304/181255

I've started using this same style of bash programming after reading Kfir Lavi's blog post "Defensive Bash Programming". He gives quite a few good reasons, but personally I find these the most important:

  • procedures become descriptive: it's much easier to figure out what a particular part of the code is supposed to do. Instead of a wall of code, you see "Oh, the find_log_errors function reads that log file for errors". Compare that with finding a whole lot of awk/grep/sed lines that use god knows what kind of regex in the middle of a lengthy script - you've no idea what it's doing there unless there are comments.

  • you can debug a function by enclosing it between set -x and set +x. Once you know the rest of the code works alright, you can use this trick to focus debugging on that specific function alone. Sure, you can enclose any part of a script, but what if it's a lengthy portion? It's easier to do something like this:

     set -x
     parse_process_list
     set +x
    
  • printing usage with cat <<- EOF . . . EOF. I've used it quite a few times to make my code look much more professional. In addition, a parse_args() function built on the getopts builtin is quite convenient. Again, this helps with readability, instead of shoving everything into the script as a giant wall of text. It's also convenient to reuse these.
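A minimal sketch of that pattern (the option letters and the script's purpose are invented here for illustration; a plain <<EOF here-doc is used, while <<- additionally strips leading tabs so the body can stay indented):

```shell
#!/usr/bin/env bash

print_usage() {
    cat <<EOF
Usage: ${0##*/} [-v] [-f FILE]
  -v       verbose output
  -f FILE  read input from FILE
EOF
}

parse_args() {
    while getopts "vf:" opt; do
        case $opt in
            v) verbose=1 ;;
            f) input_file=$OPTARG ;;
            *) print_usage; exit 1 ;;   # unknown option: show usage and bail out
        esac
    done
}

parse_args "$@"
```

Both the usage text and the option handling now live in named, reusable units instead of being inlined at the top of the script.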

And obviously, this is much more readable for someone who knows C, Java, or Vala but has limited bash experience. As far as efficiency goes, there's not a lot you can do - bash itself isn't the most efficient language, and people prefer perl and python when it comes to speed and efficiency. However, you can nice a function (note that nice is an external command and cannot see shell functions directly, so the function has to be exported and run in a child bash):

export -f resource_hungry_function
nice -n 10 bash -c resource_hungry_function

Compared to calling nice on each and every line of code, this saves a whole lot of typing AND can be conveniently used when you want only a part of your script to run at lower priority.

Running functions in the background also helps, in my opinion, when you want a whole bunch of statements to run in the background as a single job.

Some of the examples where I've used this style:

  • https://askubuntu.com/a/758339/295286
  • https://askubuntu.com/a/788654/295286
  • https://github.com/SergKolo/sergrep/blob/master/chgreeterbg.sh

In my comment, I mentioned three advantages of functions:

  1. They are easier to test and verify correctness.

  2. Functions can be easily reused (sourced) in future scripts.

  3. Your boss likes them.

And, never underestimate the importance of number 3.
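Point 2 can be sketched like this, assuming the shared functions live in a file called mylib.sh (a name invented here):

```shell
# mylib.sh -- a hypothetical file holding functions shared across scripts
timestamp() {
    date +%Y-%m-%dT%H:%M:%S
}

# Any later script can then pull the whole library into its own shell:
#
#   . /path/to/mylib.sh          # "." (or "source") reads and executes the file
#   echo "started at $(timestamp)"
```

Because sourcing runs the file in the current shell, every function it defines becomes available to the caller, exactly as if it had been typed in place.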

I would like to address one more issue:

... so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put declare_variables after walk_into_bar, that would break things.

To get the benefit of breaking code into functions, one should try to make the functions as independent as possible. If walk_into_bar requires a variable that is not used elsewhere, then that variable should be defined in and made local to walk_into_bar. The process of separating the code into functions and minimizing their inter-dependencies should make the code clearer and simpler.

Ideally, functions should be easy to test individually. If, because of interactions, they are not easy to test, then that is a sign that they might benefit from refactoring.
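As a sketch of what "easy to test individually" can look like (is_valid_port and the checks below are my own illustration, not from any particular script):

```shell
#!/usr/bin/env bash

# A function with no hidden dependencies: input in, exit status out.
is_valid_port() {
    [[ $1 =~ ^[0-9]+$ ]] && (( $1 >= 1 && $1 <= 65535 ))
}

# Because it touches no globals and no files, testing it is three lines:
is_valid_port 8080   && echo "8080  accepted"
is_valid_port 70000  || echo "70000 rejected"
is_valid_port http   || echo "http  rejected"
```

A function that instead reached into global variables or read files would need scaffolding before each check - which is exactly the refactoring signal described above.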