# Why can't we define more Maxwell's relations?

Let's get to the concept of *differential $1$-form* in a friendly way.

Imagine a generic vector field $\vec{v}(\vec{x})$ over $\Bbb R^n$. It is defined by the $n$ functions that specify its components as functions of position: $$v_i(\vec{x}), \qquad \mathrm{with} \quad i=1,...,n. \tag{1}$$

Now let's take a continuous path $\Gamma$ in $\Bbb R^n$. We can calculate the integral of $\vec{v} \cdot d\vec{l}$ along this curve: $$ \mathrm I (\Gamma) = \int_\Gamma \vec{v}(\vec{x}) \cdot d\vec{l} = \int_\Gamma \left[ v_1(\vec{x})dx_1 + v_2(\vec{x})dx_2 + ... + v_n(\vec{x})dx_n \right] = \int_\Gamma \sum_{i=1}^n v_i(\vec{x})dx_i \tag{2} $$
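If you want to play with this numerically, the sum in $(2)$ can be approximated directly by sampling the path. Here is a minimal Python/NumPy sketch; the field and path below are just illustrative choices of mine:

```python
import numpy as np

def line_integral(v, path, n=20_000):
    """Approximate I(Gamma) = integral of v . dl along a path on [0, 1].

    v    : function mapping a point of R^n to the components v_i(x)
    path : function mapping t in [0, 1] to a point of R^n
    """
    t = np.linspace(0.0, 1.0, n)
    x = np.array([path(ti) for ti in t])   # sample points along Gamma
    dx = np.diff(x, axis=0)                # small displacements dl
    mid = (x[:-1] + x[1:]) / 2             # midpoints where v is evaluated
    v_mid = np.array([v(m) for m in mid])
    return float(np.sum(v_mid * dx))       # sum over steps of sum_i v_i dx_i

# Illustrative field v = (y, x) and a straight path from (0, 0) to (1, 1):
v = lambda p: np.array([p[1], p[0]])
straight = lambda t: np.array([t, t])
print(line_integral(v, straight))          # ~ 1.0
```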

To get a feel for the significance of the quantity $ \mathrm I (\Gamma) $, consider a few examples:

1. If $\vec{v}$ were a force field $\vec{F}$ in $\Bbb R^3$, then $ \mathrm I (\Gamma) $ would be the work done by the field $\vec{F}$ on a particle moving along the trajectory $\Gamma$.

2. If $\vec{v}$ were the velocity field of a fluid and $\Gamma$ were a closed curve, $ \mathrm I (\Gamma) $ would be the circulation of the fluid along said curve.

3. If $\vec{v}$ were the electrostatic field $\vec{E}$ instead, $ \mathrm I (\Gamma) $ would be the difference of electric potential $\Delta V$ between the initial and final points of $\Gamma$.

4. If $\vec{v}$ were the gradient of a function $f$ over $\Bbb R^n$, so that $\vec{v}(\vec{x}) = \nabla f (\vec{x})$, then $ \mathrm I (\Gamma) $ would be the difference $f(\vec{x}_f) - f (\vec{x}_i)$ of the function $f$ between the final point $\vec{x}_f$ and the initial point $\vec{x}_i$ of the curve $\Gamma$.

Coming back to our line of thought, we say that any expression of the form
$$ v_1(\vec{x})dx_1 + v_2(\vec{x})dx_2 + ... + v_n(\vec{x})dx_n = \sum_{i=1}^n v_i(\vec{x})dx_i \tag{3} $$
is a *differential $1$-form* over $\Bbb R^n$. You can see it as the most general "thing" whose integration along a path $\Gamma$ in $\Bbb R^n$ is meaningful.

Please note that you can use nearly *any* set of $n$ functions $\left\{ v_i(\vec{x}) \right\}$ to define a differential 1-form. These functions only need to satisfy some regularity condition in order to guarantee that integral $(2)$ is well defined.

Now, let's consider again example 4. In this case $\vec{v}(\vec{x}) = \nabla f (\vec{x})$, so that our differential form is
$$ \sum_{i=1}^n v_i(\vec{x})dx_i = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\vec{x})dx_i = df \tag{4}$$
When this happens (that is, when the functions $v_i(\vec{x})$ can be seen as the partial derivatives of some function $f$), we say that the form is a *perfect* (or *exact*) *differential*. **This happens if and only if $\mathrm I (\Gamma)$ is zero for every closed path $\Gamma$**.

Let's see why this is true.

If $ \sum_{i=1}^n v_i(\vec{x})dx_i $ is a perfect differential, then $\mathrm I (\Gamma)$ is the difference $f(\vec{x}_f) - f (\vec{x}_i)$ from example 4; but since $\Gamma$ is closed, $\vec{x}_f$ and $\vec{x}_i$ are the same point, so this difference must be zero.

For the converse, assume that $\mathrm I (\Gamma)$ is zero for every closed path $\Gamma$. Then we can define a function $f$ fulfilling condition $(4)$ as follows: choose a point $\vec{x}_0$ arbitrarily, and define $f(\vec{x})$ as the integral $(2)$ over a path $\Gamma$ leading from $\vec{x}_0$ to $\vec{x}$. There are infinitely many paths leading from $\vec{x}_0$ to $\vec{x}$, but $f$ is nevertheless well defined, because the result is the same for any such path. Indeed, given two such paths, consider the circuit formed by following the first path from $\vec{x}_0$ to $\vec{x}$ and then the second path in reverse from $\vec{x}$ to $\vec{x}_0$: by hypothesis the integral over this circuit is zero, because the circuit is a closed path. Hence the integrals along the two paths must coincide, in order to cancel out over the circuit, and $f$ is well defined. (One can then check, by moving the endpoint $\vec{x}$ along each coordinate direction, that the partial derivatives of this $f$ are precisely the $v_i$.) *Quod erat demonstrandum*.
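The path-independence part of this argument is easy to verify symbolically. Here is a small SymPy sketch, using an arbitrary $f$ of my choosing: the integral of $df$ along two different paths between the same endpoints comes out the same.

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
f = x**2 * y                            # an arbitrary smooth function
vx, vy = sp.diff(f, x), sp.diff(f, y)   # components of the perfect differential df

def integrate_along(px, py):
    """Integrate vx dx + vy dy along the path (px(t), py(t)), t in [0, 1]."""
    integrand = (vx * sp.diff(px, t) + vy * sp.diff(py, t)).subs({x: px, y: py})
    return sp.integrate(integrand, (t, 0, 1))

# Two different paths from (0, 0) to (1, 1):
I1 = integrate_along(t, t)       # straight line
I2 = integrate_along(t, t**2)    # parabola
print(I1, I2)                    # both equal f(1,1) - f(0,0) = 1
```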

Now let's point out a very important fact: given an arbitrary set of functions $v_i(\vec{x})$, the integral $(2)$ taken over a closed path is, in general, not zero (e.g., look at example 2, or at example 1 for a non-conservative force field). This means that **not every differential form is a perfect differential**.
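A classic concrete instance, in the spirit of example 2: the form $-y\,dx + x\,dy$ integrated around the unit circle gives $2\pi$, not zero, so it cannot be a perfect differential. A quick SymPy check:

```python
import sympy as sp

t = sp.symbols('t')
# The 1-form -y dx + x dy, evaluated around the closed unit circle
# x = cos(t), y = sin(t), t in [0, 2*pi]:
x_t, y_t = sp.cos(t), sp.sin(t)
integrand = (-y_t) * sp.diff(x_t, t) + x_t * sp.diff(y_t, t)
I_loop = sp.integrate(integrand, (t, 0, 2*sp.pi))
print(I_loop)   # 2*pi: nonzero, so -y dx + x dy is not a perfect differential
```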

Finally, I can now address your question. I'm sorry it took me a long time to get here.

Consider the energy differential $dU = TdS - PdV$. We know that this is a perfect differential because we obtained it explicitly by differentiating the function $U$. The Maxwell relation holds precisely because $T$ and $-P$ are partial derivatives of $U$.

On the other hand, you don't know *a priori* if the differential $dX = TdS + PdV$ is perfect, so you don't know if the function $X$ is well defined. Then you can't be sure that $T$ and $P$ are the partial derivatives of a function $X$, and therefore you can't deduce a Maxwell relation from differential $dX$.

As a matter of fact, it happens that trying to deduce a Maxwell relation in this way leads to a contradiction, as you found out, so we can be sure *a posteriori* that $dX$ is not a perfect differential and thus function $X$ is not well defined. In other words, no function can have $T$ and $P$ as its partial derivatives.

**EDIT: Some extra clarification**

In order to check whether a differential form, e.g. $v_1dx_1 + v_2dx_2$, is perfect, we need some information about the functions $v_i$, e.g. $v_1(x_1,x_2)$ and $v_2(x_1,x_2)$. For example, if we explicitly *know* these functions, then we can explicitly check whether the integral over every closed path is zero. That's a necessary and sufficient condition.

In the lucky case in which we know the functions $v_i(\vec{x})$ explicitly, this check can be made even easier, thanks to a theorem that I'll state in a moment. First, let's define what a *closed* differential 1-form is: it is a differential 1-form for which these equalities hold:
$$\frac{\partial v_i}{\partial x_j} = \frac{\partial v_j}{\partial x_i}$$
for every $i$ and $j$. That is, a closed differential 1-form is one for which the analogue of Maxwell's relations holds. An exact differential is always closed, by the symmetry of second partial derivatives. Does it also work in the reverse direction? Well, kind of. The theorem (a version of the Poincaré lemma) states that if a differential form is closed in an open star-shaped domain, then it is exact in that domain (the function $f$ is well defined in that domain, but we can't say with certainty whether it can be extended outside).
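When the $v_i$ are known explicitly, the closedness condition is a one-liner to check. A small SymPy sketch (the two example forms are my own illustrative choices):

```python
import sympy as sp

x, y = sp.symbols('x y')

def is_closed(vx, vy):
    """Check the Maxwell-like condition d(vx)/dy == d(vy)/dx."""
    return sp.simplify(sp.diff(vx, y) - sp.diff(vy, x)) == 0

print(is_closed(y, x))    # True:  y dx + x dy = d(x*y), exact hence closed
print(is_closed(-y, x))   # False: -y dx + x dy, nonzero circle integral
```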

Unfortunately, in the case of the differential $dX = TdS + PdV$ the kind of checks I just described are not possible, because we don't know $T$ and $P$ explicitly as functions of $S$ and $V$. This is because the thermodynamic framework in which we are reasoning is very general, while these two functions are different from physical system to physical system. Therefore, we have to find information about functions $T$ and $P$ in some other way.

If we want to keep things general (that is, if we do not want to introduce a specific system), we have to use general facts about $T$ and $P$, true for every system. One such fact (a pretty fundamental one) is that every system has an energy $U$ that depends on $S$ and $V$, and $T$ and $-P$ are *defined* to be its partial derivatives. This fact can in a sense be proved using Statistical Mechanics, but in Thermodynamics we just postulate it, because it derives from the microscopic details of matter, which lie beyond the scope of Thermodynamics.

Once we accept this, which is equivalent to postulating that $U(S,V)$ exists and that $dU = TdS-PdV$, we can use the existence of $U$, and the fact that $T$ and $-P$ are its derivatives, to prove that $X$ is not well defined. I did it using Maxwell relations in a *reductio ad absurdum* proof, while @jacob1729 did it in a different way in his answer, but the substance doesn't change: we have to use the *postulated* existence of the function $U$, with its postulated properties. Otherwise we know nothing about the functions $T$ and $P$ and can't check anything about $dX$.
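To see all of this at work in one concrete system, we can pick an explicit $T(S,V)$ and $P(S,V)$ and check both differentials. The ideal-gas expressions below (in units with $n = R = 1$) are my illustrative assumption, not part of the general argument:

```python
import sympy as sp

S, V = sp.symbols('S V', positive=True)

# Illustrative monatomic ideal gas in units with n = R = 1 (an assumption,
# just to have explicit functions of S and V):
T = sp.exp(2*S/3) / V**sp.Rational(2, 3)   # T(S, V)
P = T / V                                  # ideal-gas law, P = T/V here

# dU = T dS - P dV is closed: dT/dV equals d(-P)/dS ...
print(sp.simplify(sp.diff(T, V) - sp.diff(-P, S)))   # 0

# ... while dX = T dS + P dV is not: dT/dV differs from dP/dS.
print(sp.simplify(sp.diff(T, V) - sp.diff(P, S)))    # nonzero
```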

The other answer is good, but it seems to stop short of actually explaining why:

$$dX = TdS + pdV$$

cannot be an exact differential. This is because $dU=TdS-pdV$ is exact, so if $dX$ were exact too, we would also have

$$d(X-U)=2pdV$$

being exact. But it isn't: for $2p\,dV$, which has no $dS$ term, exactness would require $\left(\partial (2p)/\partial S\right)_V = 0$, i.e. $p$ independent of $S$ at fixed $V$, and you can check for yourself that this fails for a generic system.
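For a concrete check, one can again plug in an explicit $p(S,V)$; the ideal-gas form below (units with $n = R = 1$) is just an illustrative assumption:

```python
import sympy as sp

S, V = sp.symbols('S V', positive=True)
# Illustrative ideal-gas pressure in units with n = R = 1 (an assumption):
p = sp.exp(2*S/3) / V**sp.Rational(5, 3)

# d(X - U) = 2p dV has coefficients (0, 2p) with respect to (dS, dV).
# Closedness would require d(2p)/dS == d(0)/dV == 0, i.e. p independent
# of S at fixed V -- which fails:
print(sp.diff(2*p, S))   # nonzero
```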