Left to right application of operations on a list in Python 3

The answer from @JohanL does a nice job of showing the closest equivalent in the standard Python libraries.

I ended up adapting a gist from Matt Hagy (November 2019) that is now published on PyPI:

https://pypi.org/project/infixpy/

from infixpy import *
a = (Seq(range(1,51))
     .map(lambda x: x * 4)
     .filter(lambda x: x <= 170)
     .filter(lambda x: len(str(x)) == 2)
     .filter(lambda x: x % 20 == 0)
     .enumerate()
     .map(lambda x: 'Result[%d]=%s' % (x[0], x[1]))
     .mkstring(' .. '))
print(a)

  # Result[0]=20  .. Result[1]=40  .. Result[2]=60  .. Result[3]=80

Other approaches described in other answers

  • pyxtension https://stackoverflow.com/a/62585964/1056563

    from pyxtension.streams import stream

  • sspipe https://stackoverflow.com/a/56492324/1056563

    from sspipe import p, px

Older approaches

I found a more appealing toolkit in Fall 2018

https://github.com/dwt/fluent


After a fairly thorough review of the available third-party libraries, it seems that Pipe (https://github.com/JulienPalard/Pipe) best suits the needs.

You can create your own pipeline functions. I put it to work for wrangling some text, shown below. The final pipeline is where the work happens; the @Pipe functions only have to be coded once and can then be re-used.

The task here is to associate each abbreviation in the first text:

rawLabels="""Country: Name of country
Agr: Percentage employed in agriculture
Min: Percentage employed in mining
Man: Percentage employed in manufacturing
PS: Percentage employed in power supply industries
Con: Percentage employed in construction
SI: Percentage employed in service industries
Fin: Percentage employed in finance
SPS: Percentage employed in social and personal services
TC: Percentage employed in transport and communications"""

With an associated tag in this second text:

mylabs = "Country Agriculture Mining Manufacturing Power Construction Service Finance Social Transport"

Here's the one-time coding for the functional operations (to be reused in subsequent pipelines):

from pipe import Pipe

@Pipe
def split(iterable, delim=' '):
    for s in iterable:
        yield s.split(delim)

@Pipe
def trim(iterable):
    for s in iterable:
        yield s.strip()

@Pipe
def pzip(iterable, coll):
    for s in zip(list(iterable), coll):
        yield s

@Pipe
def slice(iterable, dim):
    if len(dim) == 1:
        for x in iterable:
            yield x[dim[0]]
    elif len(dim) == 2:
        for x in iterable:
            for y in x[dim[0]]:
                yield y[dim[1]]

@Pipe
def toMap(iterable):
    return dict(list(iterable))

And here's the big finale, all in one pipeline:

labels = (rawLabels.split('\n') 
     | trim 
     | split(':')
     | slice([0])
     | pzip(mylabs.split(' '))
     | toMap )

And the result:

print('labels=%s' % repr(labels))

labels={'PS': 'Power', 'Min': 'Mining', 'Country': 'Country', 'SPS': 'Social', 'TC': 'Transport', 'SI': 'Service', 'Con': 'Construction', 'Fin': 'Finance', 'Agr': 'Agriculture', 'Man': 'Manufacturing'}
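For the curious, the `|` chaining that the Pipe library provides can be sketched in a few lines: a class that wraps a function and overrides `__ror__`, so that `iterable | pipe_obj` calls the wrapped function with the iterable as its first argument. This is a minimal illustration of the idea, not the library's actual implementation, and the `take` helper is a hypothetical example:

```python
class MiniPipe:
    """Minimal sketch of a Pipe-style decorator: __ror__ makes
    `iterable | pipe_obj` call the wrapped function on the iterable."""
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, iterable):          # evaluated for: iterable | self
        return self.fn(iterable)

    def __call__(self, *args, **kwargs):  # bind extra arguments, return a new pipe
        return MiniPipe(lambda iterable: self.fn(iterable, *args, **kwargs))

@MiniPipe
def take(iterable, n):
    # yield at most n items from the iterable
    for _, x in zip(range(n), iterable):
        yield x

print(list(range(10) | take(3)))  # [0, 1, 2]
```

Everything else in the Pipe library is built on this one trick: each `@Pipe` function receives the upstream iterable as its first argument and yields the downstream values.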

Here is another solution, using the sspipe library.

Note that all functions used here (map, filter, str, len, enumerate, str.format, str.join) are built-in Python functions, except for p and px, so you don't need to learn new function names or a new API. The only things you need are the p wrapper and the px placeholder:

from sspipe import p, px
a = (
    range(1, 50+1)
    | p(map, px * 4)
    | p(filter, px <= 170)
    | p(filter, p(str) | p(len) | (px == 2))
    | p(filter, px % 20 == 0)
    | p(enumerate)
    | p(map, p('Result[{0[0]}]={0[1]}'.format)) 
    | p('  ..  '.join)
)
print(a)

Even though it is not considered Pythonic, Python still contains map and filter, and reduce can be imported from functools. Using these functions it is possible to build the same pipeline as the one you have in Scala, albeit written in the opposite direction (from right to left, rather than left to right):

from functools import reduce
a = reduce(lambda f,s: f'{f} .. {s}',
    map(lambda nx: f'Result[{nx[0]}]: {nx[1]}',
    enumerate(
    filter(lambda n: n%20 == 0,
    filter(lambda n: len(str(n)) == 2,
    filter(lambda n: n <= 170,
    map(lambda n: n*4,
    range(1,51))))))))

Now, this is lazy, in the sense that each value is transported through the whole pipe before the next one is evaluated. However, since all values are consumed by the final reduce call, this is not visible here.
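To make that per-element evaluation visible, you can tag each stage with a side effect. The `tag` helper below is just for illustration; it records every value passing through a stage and returns it unchanged:

```python
log = []

def tag(stage):
    """Record each value passing through a stage, then pass it on unchanged."""
    def inner(x):
        log.append(f'{stage}({x})')
        return x
    return inner

# A small map/filter/map pipe over range(4), instrumented at both ends.
pipeline = map(tag('after-filter'),
               filter(lambda n: n % 2 == 0,
                      map(tag('source'), range(4))))

first = next(pipeline)   # 0 travels through the whole pipe before 1 is produced
second = next(pipeline)  # 1 is produced and filtered out; then 2 passes through
print(log)
# ['source(0)', 'after-filter(0)', 'source(1)', 'source(2)', 'after-filter(2)']
```

The log shows that each value is pulled through every stage before the source yields the next one, which is exactly the laziness described above.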

It is possible to generate a list from each map or filter object in each step:

a = reduce(lambda f,s: f'{f} .. {s}',
    list(map(lambda nx: f'Result[{nx[0]}]: {nx[1]}',
    list(enumerate(
    list(filter(lambda n: n%20 == 0,
    list(filter(lambda n: len(str(n)) == 2,
    list(filter(lambda n: n <= 170,
    list(map(lambda n: n*4,
    list(range(1,51)))))))))))))))

Both of these expressions, especially the second one, are quite verbose, so I don't know if I would recommend them. Instead, I would recommend using list/generator comprehensions and a few intermediate variables:

n4 = [n*4 for n in range(1,51)]
fn4 = [n for n in n4 if n <= 170 if len(str(n))==2 if n%20 == 0]
rfn4 = [f'Result[{n}]: {x}' for n, x in enumerate(fn4)]
a = ' .. '.join(rfn4)

Another benefit of this approach (for you, at least) is that it keeps the order of operations found in Scala. It will also, as long as we use list comprehensions (as shown), be evaluated eagerly. If we want lazy evaluation, we can use generator comprehensions instead:

n4 = (n*4 for n in range(1,51))
fn4 = (n for n in n4 if n <= 170 if len(str(n))==2 if n%20 == 0)
rfn4 = (f'Result[{n}]: {x}' for n, x in enumerate(fn4))
a = ' .. '.join(rfn4)

Thus, the only difference is that we use parentheses instead of brackets. But, as stated before, since all the data is consumed, the difference in this example is rather minimal.
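One practical payoff of the generator version is that it can run over an unbounded source and stop early, computing only the values actually requested. A small sketch using itertools.count and islice (note the upper-bound filter is dropped here, since islice now controls termination):

```python
from itertools import count, islice

n4 = (n * 4 for n in count(1))  # infinite source, but lazy: nothing computed yet
fn4 = (n for n in n4 if len(str(n)) == 2 if n % 20 == 0)
rfn4 = (f'Result[{i}]: {x}' for i, x in enumerate(fn4))
a = ' .. '.join(islice(rfn4, 4))  # pull just the first four results through
print(a)
# Result[0]: 20 .. Result[1]: 40 .. Result[2]: 60 .. Result[3]: 80
```

With list comprehensions this would be impossible: the first line would try to materialize an infinite list.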