Why can't we usually speak of partial derivatives if the domain is not open?

The definition of the $i$th partial derivative is really as follows: at a point $x = (x_1,\dots,x_n)$, we say $$ \left.\frac{\partial f}{\partial x_i}\right|_{x} = \lim_{t \to 0} \frac{f(x_1,\dots,x_{i-1},x_i+t,x_{i+1},\dots,x_n) - f(x_1,\dots,x_{i-1},x_i,x_{i+1},\dots,x_n)}{t} $$

However, in order for this limit to make sense, $f$ needs to be defined at $x + t(0,\dots,0,1,0,\dots,0)$ (with the $1$ in the $i$th slot) whenever $|t|$ is small enough. That is, $x + t(0,\dots,0,1,0,\dots,0)$ needs to lie in $U$ for all $t$ close enough to $0$.

One way to guarantee that this definition makes sense for every point $x \in U$ is to require that for each $x$ there is some $t > 0$ so that the "cube" $$ (x_1-t,x_1+t) \times \cdots \times (x_n - t, x_n + t) $$ is contained in $U$. That is precisely what it means for $U$ to be open, so as long as $U$ is open we can talk about partial derivatives at every point.
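As a quick numerical illustration of the definition (a sketch with a made-up function $f(x,y)=x^2y$, which is defined on all of $R^2$, an open set), the difference quotient only needs $f$ to be defined slightly off $x$ along the $i$th coordinate direction:

```python
def partial(f, x, i, t=1e-6):
    """Approximate the i-th partial derivative of f at x via the
    difference quotient (f(x + t*e_i) - f(x)) / t.  This requires
    f to be defined at x + t*e_i, i.e. at points near x along the
    i-th coordinate direction."""
    xp = list(x)
    xp[i] += t
    return (f(xp) - f(list(x))) / t

# f(x, y) = x^2 * y, defined on all of R^2 (an open set),
# so the quotient makes sense at every point.
f = lambda v: v[0] ** 2 * v[1]

dfdx = partial(f, (3.0, 2.0), 0)  # exact value: 2*x*y = 12
dfdy = partial(f, (3.0, 2.0), 1)  # exact value: x^2 = 9
```

If the point sat on the boundary of the domain, `f(xp)` above would be undefined for one sign of `t`, which is exactly the problem discussed next.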


The same thing happens with ordinary derivatives. If you work on a closed interval, say $[-1,1],$ then the function $f(x)=\sqrt{1-x^2}$ is $C^{\infty}$ on $(-1,1).$ However, it is not $\require{cancel} \cancel{\textrm{derivable}}$ differentiable at $x=-1,x=1$: the one-sided difference quotients diverge there.

So, if you want to have derivatives (partial derivatives in higher dimensions) at every point, you have to consider the function as defined on a larger open set, say $(-1-\epsilon,1+\epsilon)$ for some $\epsilon>0.$
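A small numerical sketch of why $f(x)=\sqrt{1-x^2}$ fails to be differentiable at the endpoint $x=1$: the only difference quotients we can form stay inside $[-1,1]$, and they blow up as the step shrinks:

```python
import math

f = lambda x: math.sqrt(1 - x * x)  # defined only on [-1, 1]

# At x = 1 only points to the left are available, so we can only form
# the one-sided quotient (f(1 - h) - f(1)) / (-h) = -sqrt(2h - h^2)/h,
# which tends to -infinity as h -> 0.
quotients = [(f(1 - h) - f(1)) / (-h) for h in (1e-2, 1e-4, 1e-6)]
```

The successive quotients grow without bound in magnitude, so no (even one-sided) derivative exists at $x=1$.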

Generalizing the example above, consider $f(x_1,\cdots,x_n)=\sqrt{1-x_1^2-\cdots -x_n^2}$ on the closed unit ball. Here the problem occurs for the function itself, but the same failure can instead occur for some of the partial derivatives, at every point of the boundary or only at some of them.


Actually, under some favorable conditions one can do it for functions defined on subsets which are not open (and it is even useful).

Let $A$ be a subset of $R^n$ and $f: A\to R$ a function such that there exists an open subset $B$ of $R^n$ containing $A$ to which $f$ extends as a differentiable function $F: B\to R$. Then, to define partial derivatives of $f$ on $A$, you just compute the partial derivatives $D_iF$ and restrict them back to $A$.
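Here is a minimal sketch of this recipe, with a made-up example: $A = [0,1]$ (not open), $f(x) = x^2$ on $A$, and the obvious differentiable extension $F(x) = x^2$ to the open set $B = (-1, 2)$:

```python
# A = [0, 1] is not open; f(x) = x^2 on A.
# Choose an open set B = (-1, 2) containing A and a differentiable
# extension F(x) = x^2 of f to B.  Its derivative DF(x) = 2x is
# defined on all of B; restricting DF back to A defines f' on A,
# including at the boundary points 0 and 1.

def F(x):
    assert -1 < x < 2          # F lives on the open set B
    return x * x

def DF(x):
    assert -1 < x < 2          # derivative of the extension, on B
    return 2 * x

def df(x):
    assert 0 <= x <= 1         # the restriction DF|_A
    return DF(x)

endpoint_derivative = df(1.0)  # defined even at the boundary point 1
```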

For instance, this appears when you integrate vector fields/differential forms over curves and surfaces (especially when using Green's and Stokes' theorems). Also, in the Neumann boundary value problem, one talks about the normal derivative of a function on the boundary of a domain in $R^n$.

One drawback of this definition is that at some points $a\in A$ the partials $D_if(a)$ defined this way might depend on the choice of the extension $F$. (For the Neumann BVP this is not an issue, since the boundary is typically assumed to be smooth and, hence, the normal derivative is independent of the extension.) For instance, if $A$ is the $x$-axis in the plane, then $D_yf(a)$, $a\in A$, is not well-defined, since extensions of $f$ can have arbitrary behavior off the axis. The same happens for the subset $A$ which is the cuspidal curve $y^2=x^3$. This is related to a bounty question which was asked at MSE recently (maybe I will find it).
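A concrete sketch of this non-uniqueness for $A$ the $x$-axis (with the made-up restriction $f(x,0)=x$): two differentiable extensions that agree on $A$ but have different $y$-partials there:

```python
# A = the x-axis in R^2, and f(x, 0) = x on A.
# Two differentiable extensions of f to all of R^2:
F1 = lambda x, y: x          # D_y F1 = 0 everywhere
F2 = lambda x, y: x + y      # D_y F2 = 1 everywhere

a = (3.0, 0.0)               # a point of A
assert F1(*a) == F2(*a)      # both restrict to the same f on A ...

# ... but their y-partials at a disagree, so "D_y f(a)" is not
# well-defined by the extension recipe.
h = 1e-6
DyF1 = (F1(a[0], a[1] + h) - F1(*a)) / h   # approximately 0
DyF2 = (F2(a[0], a[1] + h) - F2(*a)) / h   # approximately 1
```

Note that $D_x f$ on the axis is unaffected: any extension must reproduce the values of $f$ along $A$, so the $x$-derivative is forced.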