Can we generalize relativistic expressions found in specific frames, to arbitrary frames?

Two different tensors can't both reduce to the same expression in your frame, because if a tensor is zero in one frame it is zero in every frame. Suppose you have some expression which works in a specific frame, and you know that the tensor $T$ reduces to it in that frame. Now suppose that the tensor $S$ also reduces to your expression. Then the tensor $T-S$ is zero in the special frame, which means that it is identically zero and hence $T=S$.


This is a great question, and as per usual the answer is that Sean Carroll is absolutely right. :) Javier has given a great reason why this is sort of the obvious expectation, but I sometimes find that these questions point to a much deeper conceptual confusion, so I will try to tell you how it works out "the hard way," in the hopes that it will help rectify any conceptual confusions.

Seeing this requires what may be a different perspective than you're used to on differential geometry and on what tensor fields are, but let's work through it. What does it mean to have tensor fields on some manifold $\mathcal M$?

Scalar fields

We start with the scalar fields, as always. This set $\mathcal S \subseteq (\mathcal M \to \mathbb R)$ is not usually the full set of functions from $\mathcal M$ to $\mathbb R$, because we usually want to consider only scalars which vary smoothly over the input set. But then you hit a conundrum: how do you define "smoothly vary" when you don't necessarily know the structure of the set $\mathcal M$?

A nice way out is to say that you can reinterpret any smooth function from $\mathbb R^n$ to $\mathbb R$ (that is, any function in $C^\infty(\mathbb R^n,~\mathbb R)$) as a function from $\mathcal S^n \to (\mathcal M \to \mathbb R)$ by applying it "pointwise." I like to use square brackets when a function is being used the second way and parentheses when it is being used the first way; in other words, $$f[s_1,~ s_2,~ \dots s_n] = p \mapsto f\big(s_1(p),~s_2(p),~\dots s_n(p)\big).$$ We can then impose a closure axiom on $\mathcal S$: for every $n$, all smooth functions actually map $\mathcal S^n \to \mathcal S$; they all land back in this "smooth scalar fields" subset. It turns out that this is enough to define addition $(+)$ and multiplication $(\cdot)$ on scalar fields, as well as constant fields, and even a topology on $\mathcal M$ such that all of the functions in $\mathcal S$ are continuous. It helps to have a name for this "reinterpret a smooth function on the reals as a function on the scalar fields," so I call such a thing an $n$-functor, in part because there's a cute category diagram out there.

So as a concrete example, when we're dealing with the 2-sphere (the boundary of the 3-ball) we generally admit that the 3D coordinates $x$, $y$, and $z$ are allowed scalar fields on the 2-sphere; we then close over all $n$-functors for all $n$, and we have a manifold.
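If it helps to see the closure idea concretely, here is a minimal Python sketch in which a scalar field is modeled as a plain callable from points to floats; `lift` is my own name for the $n$-functor construction (Python parentheses standing in for the square brackets above):

```python
# A minimal sketch: "lift" a smooth function on R^n to a pointwise
# operation on scalar fields (fields = callables from points to floats).
import math

def lift(f):
    """Reinterpret f : R^n -> R as a map on scalar fields, applied pointwise."""
    def lifted(*fields):
        return lambda p: f(*(s(p) for s in fields))
    return lifted

# On the 2-sphere embedded in R^3, points are (x, y, z) triples and the
# coordinate scalar fields just read off each component.
x = lambda p: p[0]
z = lambda p: p[2]

# f(a, b) = a*b + sin(b) is smooth on R^2, so lift(f)(x, z) is again a
# scalar field on the sphere.
f = lambda a, b: a * b + math.sin(b)
s = lift(f)(x, z)
print(s((0.0, 0.0, 1.0)))  # f(0, 1) = sin(1), evaluated at the north pole
```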

Vector fields

Then we bring in the vector fields. Mathematicians usually define these as the set $\mathcal V$ of derivations: the linear maps from $\mathcal S \to \mathcal S$ that obey the Leibniz law, which says that for every $n$-functor $f$ such a derivation $V$ must obey $$V~\big(f[s_1,~s_2,~\dots s_n]\big) = \sum_{i=1}^n \partial_i f[s_1,~s_2,~\dots s_n]\cdot V s_i,$$ where $\partial_1,\partial_2,\dots$ just means "partial derivative with respect to the first (resp. second, third, ...) argument, holding the other arguments constant."
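For a concrete sanity check, here is a quick symbolic verification of the Leibniz law in a chart, with sympy standing in for the scalar-field algebra; the particular derivation $V$ and the fields are arbitrary choices of mine:

```python
# Check the Leibniz law for an example derivation V = y*d/dx + d/dy.
import sympy as sp

x, y = sp.symbols('x y')
s1, s2 = x**2 + y, sp.sin(x * y)      # two scalar fields in the chart

def V(s):
    """An example derivation: V = y * d/dx + 1 * d/dy."""
    return y * sp.diff(s, x) + sp.diff(s, y)

# Take the 2-functor f(a, b) = a*b, applied pointwise: f[s1, s2] = s1*s2.
# The Leibniz law demands V(s1*s2) = s2*V(s1) + s1*V(s2).
lhs = V(s1 * s2)
rhs = s2 * V(s1) + s1 * V(s2)
print(sp.simplify(lhs - rhs))         # prints 0
```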

"How are those vectors?! I'm supposed to have components for my vectors!" I hear you asking. Well, yes: one of our axioms that makes $\mathcal M$ into a $D$-dimensional manifold is that about any point in $\mathcal M$ there will be a neighborhood (in the sense of the above-induced topology) where all points can be distinguished by at least one of $D$ scalar fields, called "coordinate fields", and all scalar fields can be expressed as a $D$-functor of the coordinate fields.

So, given the coordinate fields $\hat c_1,~\hat c_2,~\dots \hat c_D$ you can thereby define the partial derivatives with respect to them in this neighborhood, $\hat \partial_1,~\hat \partial_2,~\dots \hat \partial_D,$ and in this neighborhood they obey the Leibniz law above. Then I claim that $\hat v_i = V \hat c_i$ are scalar fields which work as components for the vector field within this neighborhood, and that in the neighborhood $V = \sum_{i=1}^D \hat v_i ~ \hat \partial_i.$ So $V$ has this global character but when you get down to what it is in any neighborhood, you still get your familiar vector components!
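Here is the same toy derivation again, sketching how the components $\hat v_i = V \hat c_i$ rebuild $V$ within the chart (an illustrative setup of mine, not a general implementation):

```python
# Recover components v_i = V(c_i) in the chart (x, y), then check that
# sum_i v_i * d/d(c_i) is the same derivation.
import sympy as sp

x, y = sp.symbols('x y')

def V(s):
    return y * sp.diff(s, x) + sp.diff(s, y)

v1, v2 = V(x), V(y)                   # components: (y, 1)

def V_rebuilt(s):
    return v1 * sp.diff(s, x) + v2 * sp.diff(s, y)

test = sp.exp(x) * sp.cos(y)          # any scalar field in the chart
print(sp.simplify(V(test) - V_rebuilt(test)))   # prints 0
```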

And as you can tell, this is a great definition because it makes geometry wonks happy: each "vector field" in $\mathcal V$ clearly has an objective existence everywhere, and it merely "happens" to be described by these particular numbers in this particular neighborhood.

Tensor fields

Ok, now we take one more slight excursion, because we want covariant and contravariant vectors. We defined a vector field above; a "covector field" is simply an $\mathcal S$-linear mapping from $\mathcal V$ to $\mathcal S$. When we add the axioms for a metric, we get that this space is isomorphic to $\mathcal V$, with the metric tensor providing a canonical translation between the two. Still, it pays to keep the two spaces separate, so call $\mathcal V^\bullet$ the usual vector space and $\mathcal V_\bullet$ the covector space.
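As a minimal numerical sketch of that metric dictionary, assuming we already have components in some frame (the Minkowski metric below is just an example choice):

```python
# Lowering an index with g_ab turns vector components v^a into the
# components v_b = g_ab v^a of the corresponding covector.
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])    # metric components g_ab
v = np.array([2.0, 1.0, 0.0, 0.0])    # a vector v^a
u = np.array([1.0, 3.0, 0.0, 0.0])    # another vector u^b

v_lower = g @ v                       # the covector g_ab v^a
print(v_lower @ u, v @ g @ u)         # the same scalar either way: g(v, u)
```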

Finally we can define the tensor fields; this comes with a definition, a notation, and an axiom. First the definition: an $[m,n]$-tensor field is any linear mapping from $m$ copies of $\mathcal V_\bullet$ and $n$ copies of $\mathcal V^\bullet$ to $\mathcal S$. Simple enough, right? But this is actually extremely powerful: you will see many proofs that say "hey, this is a linear mapping from $[0,2]$-tensors to $[1,1]$-tensors, and therefore it is actually some $[3,1]$-tensor." Really rich stuff.
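Concretely, in components a $[1,1]$-tensor is just a bilinear machine; here is a tiny numpy sketch, where the arrays are arbitrary stand-ins for components in some frame:

```python
# A [1,1]-tensor as a bilinear map: feed it one covector (for the upper
# slot) and one vector (for the lower slot), get a scalar.
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))       # components T^a_b
w = rng.standard_normal(4)            # a covector w_a, for the upper slot
v = rng.standard_normal(4)            # a vector v^b, for the lower slot

print(np.einsum('a,ab,b->', w, T, v))  # the scalar w_a T^a_b v^b
print(w @ T @ v)                       # same number via matrix products
```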

Second, the notation. We can now use "abstract index notation" as a way of keeping all of those "copies of $\mathcal V^\bullet$ and $\mathcal V_\bullet$" straight. Basically we just replace the $\bullet$ holes with any symbols we want, as long as we can tell the different copies apart. Then when we want to say "this object lives in that copy," we give it a corresponding superscript or subscript: so $v^a$ is a vector field living in the copy of $\mathcal V^\bullet$ that we call $\mathcal V^a$. Then, for example, we can define the tensor space $\mathcal T^{abc}_{de}$ as the space of linear maps from $\mathcal V_a \times \mathcal V_b \times \mathcal V_c \times \mathcal V^d \times \mathcal V^e$ to $\mathcal S$. All of the theoretical work above gets packaged into this convenient notation.

The backwards nature of the indices on $\mathcal T$ is because we have a straightforward embedding of $\mathcal V^a$ into $\mathcal T^a$: a vector is already a linear map taking covectors in $\mathcal V_a$ to scalars in $\mathcal S$. Furthermore we have a straightforward outer product: take some vector $u^a$ and some covector $v_b$ and form the tensor $u^a v_b$ living in $\mathcal T^a_b$.

Now we get to the axiom. We need this to allow that $[3,1]$-tensor we talked about above to actually act on that $[0,2]$-tensor, as well as to allow contractions from $[m+1, n+1]$-tensors down to $[m, n]$-tensors. So we need a way to reduce tensors to vectors and covectors. The axiom is that any $[m, n]$-tensor can be written as a finite sum of outer products of a bunch of vectors and covectors, $$T^{ab\dots k}_{\ell m\dots z} = \sum_{\text{Greek}} \alpha^a~ \beta^b~ \dots ~\kappa^k~ \lambda_\ell~ \mu_m~ \dots~ \zeta_z.$$ With this axiom it is much easier to say "let's contract over this pair of indices: I have an index $k$ above and an index $\ell$ below, so I do this expansion and then apply the covector in space $\mathcal V_\ell$ to the vector in space $\mathcal V^k$, collapsing those two spaces into a scalar multiplier and leaving a tensor in $\mathcal T^{ab\dots j}_{mn\dots z}$." Similarly we can immediately understand how to make one tensor operate on another tensor, because secretly they're all made out of these vectors.
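To see the contraction recipe in components, here is a numpy sketch, with an arbitrary three-term expansion standing in for the finite sum in the axiom:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a [1,1]-tensor as a finite sum of outer products u^a w_b,
# as the axiom demands (three arbitrary terms here).
terms = [(rng.standard_normal(4), rng.standard_normal(4)) for _ in range(3)]
T = sum(np.outer(u, w) for u, w in terms)

# Contracting the upper index against the lower index means applying each
# covector w to its partner vector u and summing the resulting scalars...
contraction_via_expansion = sum(w @ u for u, w in terms)
# ...which is exactly the trace of the component matrix.
print(np.isclose(contraction_via_expansion, np.trace(T)))  # True
```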

If it helps, think of building an $M \times N$ matrix out of elementary $M \times N$ matrices, each of which is zero in every entry except one: a finite sum of such matrices gives you any matrix you want, and each elementary matrix is itself an outer product of basis (co)vectors. That is what I'm asserting is possible with the above axiom.
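As a sanity check of that assertion, a few lines of numpy, with an arbitrary matrix $A$:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# Each elementary matrix is the outer product of standard basis vectors,
# so A really is a finite sum of outer products, as the axiom asserts.
e = np.eye(2)
rebuilt = sum(A[i, j] * np.outer(e[i], e[j])
              for i in range(2) for j in range(2))
print(np.allclose(A, rebuilt))  # True
```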

And this axiom is also the answer to your question. The axiom forces a coordinate-independent tensor to have a coordinate-independent splitting in terms of these vectors and covectors; when we use the other axiom to introduce coordinate fields, we get components (we apply the vector parts to the coordinates $\hat c_i$ and the covector parts to the $\hat \partial_j$), and we have already seen that such components identify a unique vector or covector.

The only subtlety here is that, as with the sphere, the choice of scalar fields $x$ and $y$ as "coordinates" only works on a specific neighborhood, one hemisphere of the overall sphere. You might prefer spherical coordinates $\phi, \theta$, but over the whole sphere $\theta$ is discontinuous and is therefore not a scalar field! It is a good example nonetheless: it is not a global scalar field, but it is smooth over one hemisphere. So there is another axiom which says, basically, "if I can build a function $\mathcal M \to \mathbb R$ piecewise over overlapping pieces spanning the entire space, and it appears to be smooth on each piece, then that function is a scalar field," which helps us resolve this issue. $\theta$ fails this test: it starts to disagree with itself, or else becomes discontinuous, as you try to spread its definition around the sphere.

Given that axiom, yes, the tensor with those components over the whole space really is the only tensor with those components.