Why does LaTeX make a small adjustment when I change the section color?

I want to change the color of a section of a document ... one section at a time.

If you prefer using high-level commands and letting LaTeX perform most of the hard work for you "behind the scenes", without having to worry about subtle positional shifts, I recommend loading the sectsty package and using its \sectionfont command. As the following example shows, one can employ \sectionfont{\color{...}} multiple times; each call applies to the sections that follow it.

(Screenshot of the output: the six section headings, each typeset in its chosen color.)

\documentclass{article}
\usepackage{sectsty} % for "\sectionfont" macro
\usepackage[dvipsnames]{xcolor} % "dvipsnames" provides the named colors used below

\begin{document}

\sectionfont{\color{RubineRed}}
\section{Introduction}

\sectionfont{\color{Dandelion}}
\section{Literature}

\sectionfont{\color{Aquamarine}}
\section{Data}

\sectionfont{\color{PineGreen}}
\section{Methods}

\sectionfont{\color{NavyBlue}}
\section{Results}

\sectionfont{\color{Violet}}
\section{Conclusion}

\end{document}

As you can check with \showoutput, there are no spacing differences in the example supplied in the question with and without \color{red}, so it isn't an instance of this problem...

However, the issue can occur (and is shown in the first posted GIF, with the centred abstract).

A real example is

\documentclass{article} % For LaTeX2e
\usepackage{xcolor}
\showoutput
\showboxdepth3
\title{Generative Adversarial Nets}

\begin{document}

\begin{center}
  zzzzz
\end{center}

{%\color{red} % <- removing the first "%" adds unwanted space before the heading
\section{Next Section}}

This framework can yield specific training algorithms for many kinds of model and optimization
algorithm. In this article, we explore the special case when the generative model generates
samples by passing random noise through a multilayer perceptron, and the discriminative model
is also a multilayer perceptron. We refer to this special case as {\em adversarial nets}.
In this case, we can train both models using only
the highly successful backpropagation and dropout algorithms
and sample from the generative
model using only forward propagation. No approximate inference or Markov chains are necessary.


\end{document}

If you uncomment the colour command, you get additional space before the heading: the colour whatsit node prevents the section command from "seeing" the vertical space already added by \end{center}, so it adds its full specified amount rather than merging the two spaces specified by \end{center} and \section.
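The mechanism is easy to see in isolation with \addvspace, which is what \section uses internally to add the space above the heading. The following is a minimal sketch of my own (not part of either answer): \addvspace only merges with glue that it finds as the last item on the vertical list, and a colour whatsit sitting after that glue hides it.

\documentclass{article}
\usepackage{xcolor}

\begin{document}

text above

\vskip20pt       % glue on the vertical list (stand-in for what \end{center} leaves behind)
%\color{red}     % <- removing the "%" puts a whatsit after that glue, hiding it from \addvspace ...
\addvspace{30pt} % ... so the full 30pt is added (50pt in total) instead of just 10pt more (30pt in total)
text below

\end{document}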

The trick is to do colour changes in horizontal rather than vertical mode as far as possible; so, for example, this works:

\documentclass{article} % For LaTeX2e
\usepackage{xcolor}
\showoutput
\showboxdepth3
\title{Generative Adversarial Nets}

\begin{document}
\color{red}

\begin{center}
  \textcolor{black}{zzzzz}
\end{center}


\section{Next Section}

\mbox{}\color{black}% the empty \mbox{} starts the paragraph first, so the colour change happens in horizontal mode
This framework can yield specific training algorithms for many kinds of model and optimization
algorithm. In this article, we explore the special case when the generative model generates
samples by passing random noise through a multilayer perceptron, and the discriminative model
is also a multilayer perceptron. We refer to this special case as {\em adversarial nets}.
In this case, we can train both models using only
the highly successful backpropagation and dropout algorithms
and sample from the generative
model using only forward propagation. No approximate inference or Markov chains are necessary.


\end{document}

Or, perhaps more neatly, the following, which just locally tacks the colour on to the \Large used by the section heading:

\documentclass{article} % For LaTeX2e
\usepackage{xcolor}
\showoutput
\showboxdepth3
\let\realLarge\Large % save the original \Large so it can be restored
\title{Generative Adversarial Nets}

\begin{document}


\begin{center}
  zzzzz
\end{center}

%\def\Large{\realLarge\color{red}} % <- uncomment to colour just this heading
\section{Next Section}
%\let\Large\realLarge              % <- uncomment to restore \Large afterwards

This framework can yield specific training algorithms for many kinds of model and optimization
algorithm. In this article, we explore the special case when the generative model generates
samples by passing random noise through a multilayer perceptron, and the discriminative model
is also a multilayer perceptron. We refer to this special case as {\em adversarial nets}.
In this case, we can train both models using only
the highly successful backpropagation and dropout algorithms
and sample from the generative
model using only forward propagation. No approximate inference or Markov chains are necessary.


\end{document}
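If you want this one section at a time, as in the question, the same idea can be wrapped in a small helper. This is a minimal sketch of my own (the macro name \colorsection is not from either answer), assuming the standard article class, where the heading is set in \Large:

\documentclass{article}
\usepackage[dvipsnames]{xcolor}

\let\realLarge\Large % keep a copy of the original \Large

% #1 = colour name, #2 = section title. The colour is injected through \Large,
% i.e. inside the heading itself, so the \addvspace above the heading is unaffected.
\newcommand{\colorsection}[2]{%
  \def\Large{\realLarge\color{#1}}%
  \section{#2}%
  \let\Large\realLarge}

\begin{document}

\begin{center}
  zzzzz
\end{center}

\colorsection{NavyBlue}{Next Section}

Body text keeps the normal colour, and the space above the heading is the same
as for an uncoloured section.

\end{document}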
