Augmented functions
Pick a function, any function... Should you be able to apply it to arguments it wasn't quite designed for? That is to say, if you had a function with arguments of type A, B and C, should it be applicable to those of type T[A, B, C]? You might call this "type constructor polymorphism".
Some languages have a limited version of this: vectorization. In R and MATLAB for instance, you would expect to be able to extend a simple one-argument function to work with arrays. But there are a lot more interesting options than just T = Vector. This is where augmented functions come in: take a regular function, and extend it to work with arguments that are derived from its original ones, as specified by T. For each value of T there's a different function extension, which calls upon one or more higher-order functions. For vectorization, this is effectively `map`.
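In Scala terms, the T = Vector case can be sketched by hand in a couple of lines (the names here are purely illustrative):

```scala
// a plain one-argument function
def inc(x: Int): Int = x + 1

// vectorized by hand: the T = Vector extension is just an application of map
def incVec(xs: Vector[Int]): Vector[Int] = xs.map(inc)
```

The point of augmented functions is that this extension step happens automatically, for many more shapes of T than just Vector.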
It turns out that these augmented functions, which are functions "with adapters included", can simplify a lot of plumbing issues. From direct style, to "adaptive" type signatures, to practical matters such as... using ZIO in Java.
Let's take direct style: a key motivation is to avoid the awkwardness involved in something as simple as adding two futures. This is pointed out here: if f1 and f2 are both say of type Future[Int], then the usual song and dance is
```scala
val sum =
  for
    x <- f1
    y <- f2
  yield x + y
```

But with augmented functions this becomes simply:

```scala
val sum = f1 + f2
```
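As a rough picture of what's happening under the hood, here is a single hand-rolled instance of that augmented `+`, for `Future[Int]` only; the library derives this kind of extension generically via type classes rather than one instance at a time:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.*
import scala.concurrent.ExecutionContext.Implicits.global

// one hand-rolled instance of augmented addition: the usual
// for-comprehension plumbing, moved into an extension method
extension (f1: Future[Int])
  def +(f2: Future[Int]): Future[Int] =
    for
      x <- f1
      y <- f2
    yield x + y

val sum: Future[Int] = Future(1) + Future(2)
```

With this in scope, `Await.result(sum, 2.seconds)` gives 3.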
Here addition has been extended to handle monadic values, and what's more, so has every other function (with a reasonable number of arguments). Just as you can have auto-vectorization, here you have "auto-augmentation", thanks to Scala's remarkable facility to affect the behavior of all - yes all - functions through type classes, since they're also objects. Probably not something you want to abuse... but it's just what's required in this case. The only fly in the ointment is that by functions I mean "true functions", whereas most of the time you're actually dealing with methods, which can be trivially converted. (If you spent a long time ignoring the difference between the two, you're not alone.)
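That method-to-function conversion is ordinary eta-expansion:

```scala
def incMethod(x: Int): Int = x + 1       // a method, not a value

// eta-expansion: the method becomes a true Function1 value,
// which is the kind of object that can pick up type-class behavior
val incFunction: Int => Int = incMethod
```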
This lets you use a style you probably wanted to use anyway. In one of his handy ZIO tutorials, Alvin Alexander describes adding ZIO values directly (a + b + c) as "just pseudo-code", but it can now be used as actual code, without any changes. You can work with all those boxed integers (via Option, Future, Either, cats.effect.IO, etc.) as if they were unboxed, and you can even mix the two (e.g. Option(4) + 3). It used to be said that "We can't add one to an IO[Int], that doesn't make sense."
Well, now it does.
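The mixed case, like Option(4) + 3, can likewise be sketched by hand for this one container; the library generalizes the same idea to arbitrary functions and containers:

```scala
// hand-rolled sketch of augmented + for Option, covering both the
// fully monadic case and the mixed monadic/plain case
extension (oa: Option[Int])
  def +(b: Int): Option[Int] = oa.map(_ + b)
  def +(ob: Option[Int]): Option[Int] = oa.flatMap(a => ob.map(b => a + b))
```

With this in scope, `Option(4) + 3` evaluates to `Some(7)`, as does `Option(4) + Option(3)`.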
It works because there's a family of values of T that correspond to all these cases. This includes for instance
```
T[A, B, C] = (F[A], F[B], F[C])
```

where you might then have F = Option.
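For a concrete sketch of this shape, here's a three-argument function extended by hand to the tuple-of-Options case (the function and names are made up for illustration):

```scala
def volume(l: Int, w: Int, h: Int): Int = l * w * h

// hand-rolled extension of volume to T[A, B, C] = (F[A], F[B], F[C])
// with F = Option; the library derives this shape for you
def volumeAug(t: (Option[Int], Option[Int], Option[Int])): Option[Int] =
  val (l, w, h) = t
  l.flatMap(a => w.flatMap(b => h.map(c => volume(a, b, c))))
```

A single None argument makes the whole result None, as you'd expect from the underlying flatMaps.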
There are also values of T that correspond to monadic chains, such as
```
T[A, B, C] = (F[A], A => F[B], (A, B) => F[C])
```

The upshot is that the key information from a tangled nest of flatMaps is passed simply as a list of arguments to the extended function: unsurprisingly this will include functions of type A => F[B]. The extra fluff is all gone: no more explicit references to higher-order functions, and no more nesting. It's also "purer" than the equivalent for-comprehension, which uses its own syntax. It represents the bare-bones version of a monadic chain, which can later be "reinflated" into a flatMap thicket or a comprehension.
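Hand-rolled for F = Option, that chain shape looks like this (names illustrative): the flatMap nesting lives here once, instead of at every call site, and the caller supplies a flat list of arguments.

```scala
// the chain shape T[A, B, C] = (F[A], A => F[B], (A, B) => F[C]),
// "reinflated" into the flatMap thicket it stands for
def chained[A, B, C](
    fa: Option[A],
    f: A => Option[B],
    g: (A, B) => Option[C]
): Option[C] =
  fa.flatMap(a => f(a).flatMap(b => g(a, b)))
```

For example, `chained(Option(10), a => Option(a + 1), (a, b) => Option(a * b))` reduces to `Some(110)`.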
Purity is one thing, but using standard lambda-function syntax has practical benefits, because it's universal on the JVM. This means that you can often take a monadic chain represented this way in Scala, and copy-paste it into Java (with the trivial change of switching lambda arrows from => to ->). Voila: ZIO in Java. To illustrate this, I've converted a couple of Alvin's examples.
So is that what this is mainly about, a compact notation that can function as "a better for", or to use the stock phrase, "syntax sugar"? Actually there's a bit more to it than that.
To each value of T, there corresponds extended function behavior. Broadly speaking, the library maps T values to higher-order functions. In this respect comprehensions are in fact fairly limited: everything happens within the same container, and anything extra requires some contortions. These limitations do not apply to values of T, so there's a lot more flexibility: arguments can be either monadic or plain (they will then be "lifted"), and might even be such that a monad transformer is required. But now the details of how to apply one can be left to the library, which consults the Cats EitherT documentation so you don't have to.
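To see what's being spared here, this is roughly the manual Future-plus-Either plumbing that a transformer like EitherT abstracts away (the functions are hypothetical, and this hand-rolled sketch deliberately avoids the Cats dependency):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.*
import scala.concurrent.ExecutionContext.Implicits.global

// two hypothetical effectful steps, each mixing Future and Either
def fetchUser(id: Int): Future[Either[String, String]] =
  Future(if id > 0 then Right(s"user$id") else Left("bad id"))

def fetchScore(user: String): Future[Either[String, Int]] =
  Future(Right(user.length))

// the nested unwrap-and-rewrap plumbing that EitherT (and, here,
// the augmented-function library) would otherwise hide
def userScore(id: Int): Future[Either[String, Int]] =
  fetchUser(id).flatMap {
    case Left(e)  => Future.successful(Left(e))
    case Right(u) => fetchScore(u)
  }
```

Every additional step in the chain adds another layer of this pattern-matching, which is exactly the boilerplate that can be delegated.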
Augmented functions are not so much a type of notation as they are a particular type of function: one which includes the use of higher-order functions as part of its definition. Their domains are cobbled together from the initial domain and every other addition corresponding to a different value of T. In case you're thinking, what's the difference between that and just applying higher-order functions to the original vanilla function, it's this: in one case you have to choose and apply the right HOF yourself, and in the other the type system does it for you. (If all the functions in a call chain are augmented, the type system does something else: it adapts their effective type signatures, even as the nominal ones stay the same.)
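A minimal sketch of that type-driven dispatch, with hypothetical names, might look like this: a type class maps each container to its version of `map`, and an extension method lets the compiler, rather than the caller, pick the instance.

```scala
// minimal sketch of type-driven HOF selection (all names hypothetical)
trait Mappable[F[_]]:
  def map[A, B](fa: F[A])(f: A => B): F[B]

given Mappable[Option] with
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)

given Mappable[Vector] with
  def map[A, B](fa: Vector[A])(f: A => B): Vector[B] = fa.map(f)

// every true function value gains an augmented application; the
// type system chooses the right instance from the argument's type
extension [A, B](f: A => B)
  def on[F[_]](fa: F[A])(using m: Mappable[F]): F[B] = m.map(fa)(f)
```

So `((x: Int) => x + 1).on(Option(3))` gives `Some(4)`, and the same function applied `.on(Vector(1, 2, 3))` gives `Vector(2, 3, 4)`: one function, two extended domains, no HOF named at the call site.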
They tend to erase the distinction between plain and monadic values, by providing a different answer to the question "What color is your function?", namely: all of them. If every function can handle optional or either-ish values smoothly, the type system's insistence that "you know, this number can also be an ArithmeticException" can be met with a shrug. The distinction could become nearly as unremarkable as the one between Java's plain and boxed integers.
It's as if you were buying an appliance that already included every adapter in the box: wasteful and cumbersome in the real world, but free and unobtrusive here. It means you can use higher-order functions without even remembering their names... the right one will be called once the value of T is recognized: filter, map, flatMap, fold, traverse, monad transformers, etc. This seems like true (higher-order) type-driven development...
Wasn't that half the argument against using monads? The strange function names, the awkwardness of things like monad transformers... if it's all done in the background, does any of that still matter? Perhaps not, after all.