How can I iterate this process?

Here's one way:

SeedRandom[1];
a = RandomVariate[UniformDistribution[{0, 1}]];
b = RandomVariate[UniformDistribution[{0, 1}]];
c = RandomVariate[UniformDistribution[{0, 1}]];
a + a b + a b c

0.980367

SeedRandom[1];
values = RandomVariate[UniformDistribution[{0, 1}], 3];
Total@FoldList[Times, values]

0.980367

The number 3 can be replaced by however many times you want to iterate.
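
For convenience, this can be wrapped in a small helper function (the name iterate is just for illustration):

iterate[n_Integer?Positive] :=
  Total@FoldList[Times, RandomVariate[UniformDistribution[{0, 1}], n]]

SeedRandom[1]; iterate[3]  (* reproduces the 0.980367 above *)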

Here's a procedural solution (with the definition of values as in the previous example):

prod = First[values];
sum = First[values];
Do[
  prod *= v;
  sum += prod,
  {v, Rest[values]}
  ];
sum

0.980367


C.E.'s answer is great already. I would just like to point out that we may exploit here that floating point addition is usually significantly faster than floating point multiplication (but see the remark at the end), that FoldList is just slow, and that multiplication can be cast into addition by applying Log, so that we can use Accumulate instead. Moreover, we may use vectorized built-in routines for that.
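
As a quick sanity check (just a sketch with made-up values), Exp@Accumulate@Log reproduces the cumulative products from FoldList:

test = {0.3, 0.5, 0.7};
FoldList[Times, test]          (* {0.3, 0.15, 0.105} *)
Exp[Accumulate[Log[test]]]     (* the same cumulative products, up to rounding *)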

n = 1000000;
values = RandomVariate[UniformDistribution[{0, 1}], n];

r1 = Total@FoldList[Times, values]; // RepeatedTiming // First
r2 = Total[Exp[Clip[Accumulate[Log[values]], {-700., ∞}]]]; // RepeatedTiming // First

Max[Abs[r1 - r2]]

0.070

0.0053

0.

For those who wonder what the Clip is for: it prevents underflow error handling from kicking in (which slows things down considerably); that happens at about Exp[-709.] or so.
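
To see roughly where that threshold lies, one can inspect the smallest positive normalized machine number (a quick check, not part of the timing):

$MinMachineNumber        (* about 2.22507*10^-308 *)
Log[$MinMachineNumber]   (* about -708.4, so clipping at -700. stays safely above it *)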

Edit

It is even faster to write a short compiled version of C.E.'s procedure (if we do not count the compilation time):

cf = Compile[{{x, _Real, 1}},
   Block[{prod = 1., sum = 0.},
    (* accumulate each partial product into the running sum *)
    Do[prod *= Compile`GetElement[x, i]; sum += prod, {i, 1, Length[x]}];
    sum
    ],
   CompilationTarget -> "C"
   ];

Now:

r3 = cf[values]; // RepeatedTiming // First
Max[Abs[r1 - r3]]

0.0013

1.77636*10^-15

Remark

I formerly claimed that floating point multiplication was slower than floating point addition. As Roman pointed out, that is not correct. While multiplication probably has higher complexity (and with floating point computations, some quite counterintuitive things happen), modern hardware is built such that the various steps of a multiplication are performed in parallel. Nowadays there is even a single circuit for fused multiply-add (FMA) and not necessarily a separate addition circuit, so addition and multiplication should take basically the same time.