Multivariable calculus - Implicit function theorem

The implicit function theorem: Let $m,n$ be natural numbers, $\Omega$ an open subset of $\mathbb R^{n+m}$ whose points we write as $(x_1, \ldots ,x_n, y_1, \ldots ,y_m)$, $F\colon \Omega\to \mathbb R^m$ a class $C^1$ function, and $(a_1, \ldots ,a_n, b_1, \ldots ,b_m)\in \Omega$ a point such that $$F(a_1, \ldots ,a_n, b_1, \ldots ,b_m)=0_{\mathbb R^{\large m}}.$$ Writing $F=(f_1, \ldots, f_m)$, where for each $k\in \{1, \ldots, m\}$ the component $f_k\colon \Omega\to \mathbb R$ is a class $C^1$ function, assume that the following $m\times m$ matrix is invertible: $$\begin{pmatrix} \dfrac{\partial f_1}{\partial y_1} & \cdots & \dfrac{\partial f_1}{\partial y_m}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial f_m}{\partial y_1} & \cdots& \dfrac{\partial f_m}{\partial y_m}\end{pmatrix}(a_1, \ldots ,a_n, b_1, \ldots ,b_m).$$

Under these conditions there exist a neighborhood $V$ of $(a_1, \ldots ,a_n)$, a neighborhood $W$ of $(b_1, \ldots ,b_m)$, and a class $C^1$ function $G\colon V\to W$ such that:

  • $G(a_1, \ldots ,a_n)=(b_1, \ldots ,b_m)$ and
  • $\forall (x_1, \ldots ,x_n)\in V\left(F(x_1, \ldots , x_n, g_1(x_1, \ldots , x_n), \ldots ,g_m(x_1, \ldots , x_n))=0_{\mathbb R^{\large m}}\right)$, where $G=(g_1, \ldots ,g_m)$ and, for each $l\in \{1, \ldots , m\}$, $g_l\colon V \to \mathbb R$ is a class $C^1$ function.

Furthermore, $J_G=-\left(J_2\right)^{-1}J_1$, where $$J_G=\begin{pmatrix}\dfrac {\partial g_1}{\partial x_1} & \cdots & \dfrac {\partial g_1}{\partial x_n}\\ \vdots &\ddots &\vdots\\ \dfrac {\partial g_m}{\partial x_1} & \cdots & \dfrac {\partial g_m}{\partial x_n} \end{pmatrix}_{m\times n} \text{ evaluated at }(x_1, \ldots ,x_n),$$ $$J_2=\begin{pmatrix} \dfrac{\partial f_1}{\partial y_1} & \cdots & \dfrac{\partial f_1}{\partial y_m}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial f_m}{\partial y_1} & \cdots& \dfrac{\partial f_m}{\partial y_m}\end{pmatrix}_{m\times m} \text{ evaluated at }(x_1, \ldots , x_n, g_1(x_1, \ldots , x_n), \ldots ,g_m(x_1, \ldots , x_n)),$$ and $$J_1=\begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial f_m}{\partial x_1} & \cdots& \dfrac{\partial f_m}{\partial x_n}\end{pmatrix}_{m\times n} \text{ evaluated at }(x_1, \ldots , x_n, g_1(x_1, \ldots , x_n), \ldots ,g_m(x_1, \ldots , x_n)).$$
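A quick way to remember this formula (not part of the statement, just a heuristic check): differentiating the identity $F(x_1, \ldots , x_n, g_1(x_1, \ldots , x_n), \ldots ,g_m(x_1, \ldots , x_n))=0_{\mathbb R^{\large m}}$ with the chain rule gives

$$J_1+J_2\,J_G=0_{m\times n},$$

and since $J_2$ is invertible at, hence near, the point in question, one can solve for $J_G$ and obtain $J_G=-\left(J_2\right)^{-1}J_1$.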


We can't apply the IFT exactly as stated, because this version writes the last $m$ variables as functions of the first $n$ ones; however, looking at the proof one notices that the same statement holds for any permutation of the variables, and that is what happens here: we solve for the first two variables, $x$ and $y$, in terms of the last one, $z$.
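Concretely (introducing the name $\tilde F$ here only for this remark), one can apply the theorem as stated to the map with the variables reordered so that $z$ comes first and $(x,y)$ last:

$$\tilde F(z,x,y):=F(x,y,z), \qquad \tilde F\colon \mathbb R^{1+2}\to \mathbb R^2,$$

and then the invertibility hypothesis becomes invertibility of the matrix of partial derivatives of $(f_1,f_2)$ with respect to $(x,y)$, which is exactly what is checked below.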

In the notation above one has $n=1$, $m=2$, $\Omega =\mathbb R^{n+m}=\mathbb R^3$, and $F\colon \mathbb R^{3}\to \mathbb R^2$ given by $F(x,y,z)=(f_1(x,y,z), f_2(x,y,z))$, where $f_1(x,y,z)=x+yz-z^3$ and $f_2(x,y,z)=x^3-xz+y^3$.

For all $(x,y,z)\in \mathbb R^3$ it holds that:

  • $\dfrac {\partial f_1}{\partial x}(x,y,z)=1,$
  • $\dfrac {\partial f_1}{\partial y}(x,y,z)=z,$
  • $\dfrac {\partial f_2}{\partial x}(x,y,z)=3x^2-z,$ and
  • $\dfrac {\partial f_2}{\partial y}(x,y,z)=3y^2$.

Therefore $\begin{pmatrix} \dfrac {\partial f_1}{\partial x}(1,-1, 0) & \dfrac {\partial f_1}{\partial y}(1, -1, 0)\\ \dfrac {\partial f_2}{\partial x}(1, -1, 0) & \dfrac {\partial f_2}{\partial y}(1, -1, 0)\end{pmatrix}=\begin{pmatrix} 1 & 0\\ 3 & 3\end{pmatrix}$, and this matrix is invertible, since its determinant is $3\neq 0$.
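Since its inverse is what will be needed when evaluating the final formula at $z=0$, it is convenient to record it now:

$$\begin{pmatrix} 1 & 0\\ 3 & 3\end{pmatrix}^{-1}=\frac13\begin{pmatrix} 3 & 0\\ -3 & 1\end{pmatrix}=\begin{pmatrix} 1 & 0\\ -1 & \tfrac13\end{pmatrix}.$$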

So, by the IFT, there exist an interval $V$ around $z=0$, a neighborhood $W$ of $(x,y)=(1,-1)$, and a class $C^1$ function $G\colon V\to W$ such that $G(0)=(1,-1)$ and $\forall z\in V\left(F(g_1(z), g_2(z), z)=0\right)$, where $g_1(z)$ and $g_2(z)$ denote the first and second entries, respectively, of $G(z)$, for all $z\in V$. (In more familiar notation, $g_1(z)=x(z)$ and $g_2(z)=y(z)$.)
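The formula below also uses the partial derivatives of $f_1$ and $f_2$ with respect to $z$, which were not listed above; from the expressions for $f_1$ and $f_2$ they are

$$\dfrac{\partial f_1}{\partial z}(x,y,z)=y-3z^2 \qquad\text{and}\qquad \dfrac{\partial f_2}{\partial z}(x,y,z)=-x.$$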

One also finds

$$ \begin{pmatrix} \dfrac {\partial g_1}{\partial z}(z)\\ \dfrac {\partial g_2}{\partial z}(z) \end{pmatrix}= -\begin{pmatrix} \dfrac{\partial f_1}{\partial x}(g_1(z), g_2(z), z) & \dfrac{\partial f_1}{\partial y}(g_1(z), g_2(z), z)\\ \dfrac{\partial f_2}{\partial x}(g_1(z), g_2(z), z) & \dfrac{\partial f_2}{\partial y}(g_1(z), g_2(z), z) \end{pmatrix}^{-1} \begin{pmatrix} \dfrac{\partial f_1}{\partial z}(g_1(z), g_2(z), z)\\ \dfrac{\partial f_2}{\partial z}(g_1(z), g_2(z), z) \end{pmatrix}. $$

Now you can happily evaluate the RHS at $z=0$.
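For completeness, here is what that evaluation should give (barring an arithmetic slip on my part), using $g_1(0)=1$, $g_2(0)=-1$ and the partial derivatives computed above:

$$\begin{pmatrix} g_1'(0)\\ g_2'(0)\end{pmatrix}= -\begin{pmatrix} 1 & 0\\ 3 & 3\end{pmatrix}^{-1}\begin{pmatrix} -1\\ -1\end{pmatrix}= -\begin{pmatrix} 1 & 0\\ -1 & \tfrac13\end{pmatrix}\begin{pmatrix} -1\\ -1\end{pmatrix}=\begin{pmatrix} 1\\ -\tfrac23\end{pmatrix},$$

that is, $x'(0)=1$ and $y'(0)=-\dfrac23$.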