Implicit function theorem at a singular point?

This is a job for the Morse lemma. The second-degree Taylor polynomial of $F(x,y)$ has the form $ax^2+by^2$ with $ab<0$. (You said $>0$, but that can't be what you meant.) The Morse Lemma says that if a sufficiently smooth function of several variables has vanishing constant and linear parts and a nondegenerate quadratic part, then there is a smooth local change of variables making the function purely quadratic. In the case at hand (two variables and indefinite quadratic part) that means that $F=uv$, where $u(x,y)$ and $v(x,y)$ have no constant term and have linearly independent linear parts whose product is $ax^2+by^2$. Now the implicit function theorem applies to $u$ and to $v$, and you find that the solution set of $F(x,y)=0$ is locally the union of the graphs of two smooth functions, one with positive slope and one with negative slope.
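For a concrete illustration, take (just as an example) the nodal cubic $F(x,y)=y^2-x^2+x^3$. For $x<1$ it factors as $$F(x,y)=\big(y-x\sqrt{1-x}\big)\big(y+x\sqrt{1-x}\big)=u(x,y)\,v(x,y)\, ,$$ where $u$ and $v$ are smooth near the origin, vanish there, and have linearly independent linear parts $y-x$ and $y+x$ whose product $y^2-x^2$ is the quadratic part of $F$. Accordingly the zero set of $F$ near $(0,0)$ is the union of the two graphs $y=\pm x\sqrt{1-x}$, with slopes $+1$ and $-1$ at the origin.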

EDIT Here is a proof of the Morse Lemma:

First, in one variable it says that if $f(0)=f_x(0)=0$ and $f_{xx}(0)\neq 0$, then in a neighborhood of $x=0$ the function $f(x)$ is plus or minus the square of a smooth function. This is true because the vanishing of $f(0)$ means that $f(x)=xg(x)$ for some smooth $g$, the vanishing of $f_x(0)=g(0)$ means that $g(x)=xh(x)$ for some smooth $h$, and the nonvanishing of $f_{xx}(0)=2g_x(0)=2h(0)$ means that $h$ is locally plus or minus the square of a smooth function, so that $f(x)=x^2h(x)$ is locally plus or minus the square of $x\sqrt{|h(x)|}$.
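For example, with $f(x)=\cosh x-1$ the factorization reads $f(x)=x\,g(x)$ with $g(x)=(\cosh x-1)/x$, and $g(x)=x\,h(x)$ with $h(x)=(\cosh x-1)/x^2$; both quotients extend smoothly across $x=0$, with $g(0)=0$ and $h(0)=\tfrac12 f_{xx}(0)=\tfrac12>0$, so $$f(x)=x^2h(x)=\Big(x\sqrt{h(x)}\Big)^2$$ is the square of a smooth function near $0$ (indeed $\cosh x-1=\big(\sqrt2\,\sinh(x/2)\big)^2$ for all $x$).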

Now suppose that $F(x,y)$ is such that $F$, $F_x$, and $F_y$ vanish at $x=y=0$ but $F_{yy}$ does not, and suppose that the homogeneous quadratic part of the Taylor series is nondegenerate. The implicit function theorem applied to $F_y$ shows that $F_y=0$ along the graph of a smooth function $y=k(x)$ with $k(0)=0$. Define $f(x)=F(x,k(x))$. Then $F(x,y)-f(x)$ is such that both it and its $y$-derivative vanish along that graph. Therefore we may write $F(x,y)-f(x)=(y-k(x))^2H(x,y)$ for some smooth $H$. Furthermore $2H(0,0)=F_{yy}(0,0)\neq 0$, so $H$ is plus or minus the square of a smooth function. Now $F(x,y)$ is the sum of $F(x,y)-f(x)$, which is plus or minus the square of a smooth function, and $f(x)$, which must also be plus or minus the square of a smooth function by the one-variable case: indeed $f(0)=f_x(0)=0$ and $f_{xx}(0)=F_{xx}(0,0)-F_{xy}(0,0)^2/F_{yy}(0,0)$, which is the determinant of the Hessian divided by $F_{yy}(0,0)$ and hence nonzero by nondegeneracy.
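To see this construction at work on a small example, take, say, $F(x,y)=y^2+2xy-x^2$. Then $F_y=2y+2x$, so $k(x)=-x$ and $f(x)=F(x,-x)=-2x^2$, while $$F(x,y)-f(x)=y^2+2xy+x^2=\big(y-k(x)\big)^2\, ,$$ i.e. $H\equiv1$. Hence $F(x,y)=(y+x)^2-(\sqrt2\,x)^2$, the sum of a square and minus a square, which factors as $\big(y+(1-\sqrt2)x\big)\big(y+(1+\sqrt2)x\big)$, as in the first paragraph.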

And so on: the same argument, applied inductively, splits off one square at a time and gives the Morse Lemma in any number of variables.


Sketch of an elementary proof. Assume $F\in C^2(\mathbb{R}^2, \mathbb{R})$ and (changing the sign of $F$ if needed) $$F(0,0)=F_x(0,0)=F_y(0,0)=F_{xy}(0,0)=0\, ,$$ $$F_{xx}(0,0) <0\, ,\qquad F_{yy}(0,0) > 0\, .$$
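These normalizations are not a real restriction: the Hessian of $F$ at the origin is symmetric, so a rotation of coordinates diagonalizes it, and in the indefinite case considered here one diagonal entry is negative and one positive. For instance, $F(x,y)=xy$ (where $F_{xx}=F_{yy}=0$) becomes, in the rotated variables $\tilde x=(x-y)/\sqrt2$, $\tilde y=(x+y)/\sqrt2$, $$F=\tfrac12\big(\tilde y^2-\tilde x^2\big)\, ,\qquad F_{\tilde x\tilde x}=-1<0\, ,\quad F_{\tilde y\tilde y}=1>0\, ,\quad F_{\tilde x\tilde y}=0\, .$$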

By continuity there exist $\eta > 0$ and $\epsilon > 0$ such that $F_{yy}(x,y) > 0$ for $|x| \le \eta$ and $|y|\le\epsilon$. So for all $|x|\le \eta$ the function $y\mapsto F(x,y)$ is strictly convex on the interval $[-\epsilon,\epsilon]$. In particular $F(0,\pm\epsilon) >0$, because $F(0,0)=F_y(0,0)=0$. Since $F(0,\pm\epsilon) >0$ and $F_{xx}(0,0) < 0$, by continuity we also have $F(x,\pm\epsilon) >0$ and $F_{xx}(x,0) < 0$ for all $|x|\le\delta$, for some $0 < \delta\le\eta$; thus $F(x,0) < 0$ for $0 < |x|\le \delta$, because $x\mapsto F(x,0)$ is strictly concave on $[-\delta,\delta]$ and vanishes together with its derivative at $x=0$.

Now, since for all $|x|\le\delta$ the function $y\mapsto F(x,y)$ is strictly convex on $[-\epsilon,\epsilon]$, positive at $y=\pm\epsilon$ and negative at $y=0$, for any $0 < |x|\le\delta$ we have $F(x,y)=0$ for exactly one $0 < y < \epsilon$ and exactly one $-\epsilon < y < 0$, always with $F_y(x,y)\neq0$, while $F(0,y)=0$ exactly for $y=0$ if $|y|\le\epsilon$. This proves that the intersection of the zero-set of $F$ with $[-\delta,\delta]\times [-\epsilon,\epsilon]$ is the union of the graphs of two functions, $y_+: [-\delta,\delta]\to [-\epsilon,\epsilon]$ and $y_-: [-\delta,\delta]\to [-\epsilon,\epsilon]$, labelled so that $\operatorname{sgn} y_+(x)=\operatorname{sgn} x$ and $\operatorname{sgn} y_-(x)=-\operatorname{sgn} x$. Note that the fact that $\epsilon$ is arbitrary immediately implies that $y_+$ and $y_-$ are continuous at $x=0$ and vanish there.

Actually, if we locate the zero-set of $F$ with a bit more care we also find that $y_\pm(x)$ is differentiable at $x=0$: this follows from the fact that, by the second-order Taylor expansion and $F_{xy}(0,0)=0$, $F$ satisfies locally at the origin the two-sided inequality $$\big(F_{xx}(0,0)+o(1) \big)x^2/2 +\big( F_{yy}(0,0)+o(1) \big)y^2/2 \le F(x,y) $$

$$\le \big(F_{xx}(0,0) +o(1) \big) x^2/2 + \big( F_{yy}(0,0) +o(1) \big)y^2/2$$

so that $(x,y_\pm(x))$ belongs to a very thin cone around the line $y= \pm\sqrt{-F_{xx}(0,0)/F_{yy}(0,0)}\,x$, meaning that $y_\pm(x)$ is differentiable at $x=0$ and $ y'_\pm(0)= \pm\sqrt {- \frac { F_{xx}(0,0) } { F_{yy}(0,0) } }\, . $

Moreover, since $F_y(x,y)\neq 0$ for all $ (x,y) \in \{ F = 0 \} \cap \big([ -\delta, \delta] \times [-\epsilon,\epsilon]\big) \setminus\{ (0,0)\}$, the standard Implicit Function Theorem ensures that $y_+$ and $y_-$ belong to $C^0([ -\delta, \delta])\cap C^1([ -\delta, \delta]\setminus \{ 0 \}) $, with $$F_x(x,y_\pm(x))+F_y(x,y_\pm(x))\,y'_\pm(x)=0\qquad \forall x\neq0 .$$ To prove that they are in fact $C^1([ -\delta, \delta])$, note that for $|x|+|y|\to 0$ $$F_x(x,y)=F_{xx}(0,0)x+o(|x|+|y|)\, ,\qquad F_y(x,y)=F_{yy}(0,0)y+o(|x|+|y|)\, .$$ Therefore $$F_{xx}(0,0)+y'_\pm(x)\,\frac{y_\pm(x)}{x}\,F_{yy}(0,0)=\Big(1+|y'_\pm(x)|\Big)\Big(1+\big|\tfrac{y_\pm(x)}{x}\big|\Big)\,o(1)\, .$$ Since $\frac{y_\pm(x)}{x}=y_\pm'(0)+o(1)$ and $\pm y'_\pm(x)\ge0$, this implies that $y'_\pm(x) \to y'_\pm(0)$ as $x \to 0$, so $y'_\pm$ is continuous and $y_\pm \in C^1([-\delta,\delta])$.
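As a quick sanity check of the limiting slopes, take, say, $F(x,y)=2y^2-x^2$, which satisfies the assumptions with $F_{xx}(0,0)=-2$, $F_{yy}(0,0)=4$, $F_{xy}(0,0)=0$. Its zero-set consists of the two lines $y_\pm(x)=\pm x/\sqrt2$, and indeed $$y'_\pm(0)=\pm\frac1{\sqrt2}=\pm\sqrt{-\frac{F_{xx}(0,0)}{F_{yy}(0,0)}}\, .$$ Adding higher-order terms (e.g. $F(x,y)=2y^2-x^2+x^2y$) bends the two branches but leaves their slopes at the origin unchanged, since only the second-order Taylor coefficients enter the formula.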