Definite integral over $(0,1)$ rather than $[0,1]$

When integrating, whether the domain is $(0,1)$ or $[0,1]$ is inconsequential, because the integral over a single point is zero. You can test this by integrating from $a$ to $a$.
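For instance,

$$\int_2^2 x^2 \ dx = \frac{x^3}{3} \bigg|_2^2 = \frac{8}{3} - \frac{8}{3} = 0,$$

and the same cancellation happens for any integrand evaluated between identical endpoints.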


Changing the value of an integrable function at a finite number of points has no effect on the value of the definite integral, and below I will explain why.
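As a concrete example, take $f(x) = x$ on $[0, 1]$ and let $g$ agree with $f$ everywhere except at the endpoints, say $g(0) = g(1) = 100$. Both functions are integrable and

$$\int_0^1 f(x) \ dx = \int_0^1 g(x) \ dx = \frac{1}{2},$$

so the values at $0$ and $1$, and with them the choice between $(0,1)$ and $[0,1]$, simply don't matter.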

Provided a function $f$ is integrable on an interval $[a, b]$, the definite integral is rigorously defined as follows: there is a unique number $I$ such that, for every partition $\mathcal{P}$ of $[a, b]$, we have:

$$L(f, \mathcal{P}) \leq I = \int_a^b f(x) \ dx \leq U(f, \mathcal{P})$$

where $\displaystyle L(f, \mathcal{P}) = \sum_{i} (x_{i+1} - x_i)\inf \Big( \{f(x) \ | \ x \in [x_i, x_{i+1}] \} \Big)$, with the $x_i$ ranging over the points of $\mathcal{P}$,

and likewise $\displaystyle U(f, \mathcal{P}) = \sum_i (x_{i+1} - x_i)\sup \Big( \{ f(x) \ | \ x \in [x_i, x_{i+1}] \} \Big)$.
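To see the definition in action, take $f(x) = x$ on $[0, 1]$ with the uniform partition $\mathcal{P}_n = \{0, \tfrac{1}{n}, \tfrac{2}{n}, \ldots, 1\}$. Since $f$ is increasing, the infimum and supremum on each subinterval are attained at its left and right endpoints, so

$$L(f, \mathcal{P}_n) = \sum_{i=0}^{n-1} \frac{1}{n} \cdot \frac{i}{n} = \frac{n-1}{2n}, \qquad U(f, \mathcal{P}_n) = \sum_{i=0}^{n-1} \frac{1}{n} \cdot \frac{i+1}{n} = \frac{n+1}{2n},$$

and both squeeze onto the unique value $I = \tfrac{1}{2}$ as $n \to \infty$.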

And so with this in mind, it's possible to show that changing the value of an integrable function at a single point has no effect on the value of the definite integral. The idea is that you take your original partition and refine it so that the point in question is enclosed in a subinterval of arbitrarily small width. The altered value only affects the terms of $L(f, \mathcal{P})$ and $U(f, \mathcal{P})$ coming from that subinterval, and those terms are bounded in size by the subinterval's width times a fixed bound on the function values, so their contribution can be made as small as you like; the supremum of the lower sums and the infimum of the upper sums, and hence $I$, are therefore unchanged.
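If you'd like to see this numerically, here is a minimal sketch (the helper `darboux_sums` and its handling of the single redefined point are my own illustration, assuming an increasing $f$ so that the infimum and supremum on each subinterval sit at its endpoints). Redefining $f(x) = x$ to take the value $100$ at $x = 0.5$ bumps the upper sums, but the bump shrinks with the mesh and both sums still converge to $\tfrac{1}{2}$:

```python
import numpy as np

def darboux_sums(f, a, b, n, special=None):
    """Lower/upper Darboux sums of f on [a, b] over a uniform partition with
    n subintervals.  `special` is an optional (point, value) pair marking a
    single point where f has been redefined; f itself is assumed increasing,
    so inf/sup on each subinterval are otherwise attained at its endpoints."""
    xs = np.linspace(a, b, n + 1)
    lower = upper = 0.0
    for x0, x1 in zip(xs[:-1], xs[1:]):
        lo, hi = f(x0), f(x1)                 # endpoints give inf/sup for increasing f
        if special is not None:
            p, v = special
            if x0 <= p <= x1:                 # the redefined point lies in this subinterval
                lo, hi = min(lo, v), max(hi, v)
        lower += (x1 - x0) * lo
        upper += (x1 - x0) * hi
    return lower, upper

def f(x):
    return x                                  # exact integral over [0, 1] is 1/2

for n in (10, 100, 1_000, 10_000):
    L_mod, U_mod = darboux_sums(f, 0.0, 1.0, n, special=(0.5, 100.0))
    print(n, L_mod, U_mod)                    # both sums approach 0.5 as n grows
```

The lower sums are untouched (the large redefined value is never an infimum), while the upper sums pick up an extra term on the order of the redefined value times the mesh width, which vanishes as $n \to \infty$.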

For further discussion, see Chapter 6 of Walter Rudin's Principles of Mathematical Analysis (PDFs are freely available online).