Eigenvalue bound for quadratic maximization with linear constraint

The following analysis explores various approaches to the problem, but ultimately fails to produce a satisfactory solution.

One of the constraints can be rewritten using the nullspace projector of $b$
$$\eqalign{
P &= \Big(I-(b^T)^+b^T\Big) \;=\; \left(I-\frac{bb^T}{b^Tb}\right) \;=\; I-\beta\,bb^T \\
Pb &= 0, \qquad P^2=P=P^T \\
}$$
(with $\beta=(b^Tb)^{-1}$) and the introduction of an unconstrained vector $y$
$$\eqalign{
b^Tx &= a \\
x &= Py + (b^T)^+a \\
  &= Py + a\beta\,b \\
  &= Py + \alpha_0\,b \\
}$$
where $\alpha_0=a\beta$. The remaining constraint can be absorbed into the definition of the objective function itself
$$\eqalign{
\lambda &= \frac{x^TBx}{x^Tx}
 \;=\; \frac{y^TPBPy + 2\alpha_0\,y^TPBb + \alpha_0^2\,b^TBb}{y^TPy + \alpha_0^2\,b^Tb}
 \;=\; \frac{\theta_1}{\theta_2} \tag{0} \\
}$$
The gradient can be calculated by a straightforward (if tedious) application of the quotient rule as
$$\eqalign{
\frac{\partial\lambda}{\partial y}
 &= \frac{2\theta_2\big(PBPy + \alpha_0 PBb\big) - 2\theta_1\,Py}{\theta_2^2} \\
}$$
Setting the gradient to zero yields
$$\eqalign{
PBPy + \alpha_0 PBb &= \lambda\,Py \tag{1} \\
}$$
which can be rearranged into a generalized eigenvalue equation
$$\eqalign{
PB\left(Py+\alpha_0 b\right) &= \lambda\,Py \\
PBx &= \lambda\,Px \tag{2} \\
}$$
Note that multiplying the standard eigenvalue equation
$$\eqalign{
Bx &= \lambda x \tag{3} \\
}$$
by $P$ reproduces equation $(2)$, so every standard eigenpair of $B$ also satisfies $(2)$; both standard and generalized eigenvalues are therefore potential solutions.
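As a quick numerical sanity check, here is a minimal sketch in Python/NumPy. All of the test data ($n$, $B$, $b$, $a$, the seed) is arbitrary and purely illustrative; it builds $P$, confirms that $x = Py + \alpha_0 b$ satisfies the linear constraint for any $y$, and verifies that every standard eigenpair of $B$ also satisfies the generalized equation $(2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Arbitrary illustrative data (assumptions, not from the original problem).
B = rng.standard_normal((n, n))
B = (B + B.T) / 2                        # symmetric B, as the derivation assumes
b = rng.standard_normal(n)
a = 1.7                                  # right-hand side of b^T x = a

beta = 1.0 / (b @ b)                     # beta = (b^T b)^{-1}
alpha0 = a * beta                        # alpha_0 = a * beta
P = np.eye(n) - beta * np.outer(b, b)    # nullspace projector of b

# P annihilates b and is a symmetric idempotent (orthogonal projector).
assert np.allclose(P @ b, 0)
assert np.allclose(P @ P, P) and np.allclose(P, P.T)

# For any y, x = Py + alpha_0 b satisfies the linear constraint b^T x = a.
y = rng.standard_normal(n)
x = P @ y + alpha0 * b
assert np.isclose(b @ x, a)

# Multiplying Bx = lambda x by P gives PBx = lambda Px (eq. 2), so every
# standard eigenpair of B also satisfies the generalized equation.
lams, V = np.linalg.eigh(B)
for lam, v in zip(lams, V.T):
    assert np.allclose(P @ B @ v, lam * (P @ v))
```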

Unlike the eigenvalue methods, which yield only a discrete set of $\lambda$ values, equation $(1)$ can be solved for a continuous range of $\lambda$
$$\eqalign{ y &= \alpha_0\big(\lambda P-PBP\big)^+PBb \\ }$$
and produces a $y$ vector which satisfies the zero-gradient condition $(1)$.
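A short continuation of the sketch above illustrates this solve. For a trial $\lambda$ (assumed not to be a generalized eigenvalue, so that $PBb$ lies in the range of $\lambda P - PBP$ and the pseudoinverse returns an exact solution), the resulting $y$ does satisfy condition $(1)$:

```python
# Continues the sketch above (P, B, b, alpha0 are the same illustrative data).
lam_try = 0.3                                   # trial lambda, chosen arbitrarily
M = lam_try * P - P @ B @ P
y = alpha0 * np.linalg.pinv(M) @ (P @ B @ b)    # y = alpha0 (lam P - PBP)^+ PBb

# Verify the zero-gradient condition (1): PBPy + alpha0 PBb = lambda Py.
lhs = P @ B @ P @ y + alpha0 * (P @ B @ b)
rhs = lam_try * (P @ y)
print(np.allclose(lhs, rhs))                    # True for generic lambda
```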

Unfortunately, none of these approaches yields a solution which satisfies all of the constraints.

But maximizing equation $(0)$ over the $y$ vector is still the appropriate goal, and it requires a numerical rather than an analytical approach.
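For completeness, one possible numerical sketch along those lines, again reusing the illustrative quantities from the snippets above and `scipy.optimize.minimize`. A caveat: the supremum of $(0)$ may only be approached as $\|Py\|\to\infty$ (where it tends toward the largest eigenvalue of $PBP$ on the range of $P$), so the optimizer may drift rather than converge; this is a sketch, not a robust solver:

```python
# Sketch of maximizing (0) over y, reusing B, b, P, n, alpha0, rng from above.
from scipy.optimize import minimize

def neg_lambda(y):
    # Negative of the objective (0); minimizing this maximizes lambda.
    Py = P @ y
    theta1 = Py @ B @ Py + 2 * alpha0 * (Py @ B @ b) + alpha0**2 * (b @ B @ b)
    theta2 = Py @ Py + alpha0**2 * (b @ b)
    return -theta1 / theta2

res = minimize(neg_lambda, x0=rng.standard_normal(n), method="BFGS")
x_opt = P @ res.x + alpha0 * b      # recover x from the (locally) optimal y
print(-res.fun, b @ x_opt)          # attained lambda; constraint check (should equal a)
```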