Upgoat or Downgoat?

Mathematica, 100%, 141 bytes

f@x_:=Count[1>0]@Table[ImageInstanceQ[x,"caprine animal",RecognitionThreshold->i/100],{i,0,50}];If[f@#>f@ImageReflect@#,"Up","Down"]<>"goat"&

Well, this feels more than a little like cheating. It's also incredibly slow, and very silly. The function f determines roughly how high you can set the RecognitionThreshold in one of Mathematica's computer-vision built-ins while still having the image recognised as a caprine animal.

We then see whether the image or the flipped image is more goaty. This works on your profile image only because the tie is broken in favour of downgoat. There are probably loads of ways this could be improved, including asking whether the image represents Bovids or other generalisations of the Caprine animal entity type.

As written, the answer scores 100% on the first test set and 94% on the second, because the algorithm yields an inconclusive result for goat 1. This can be raised back up to 100%, at the expense of an even longer computation time, by testing more values of RecognitionThreshold: raising the resolution from 100 to 1000 suffices. For some reason Mathematica thinks that's a very ungoaty image! Changing the recognition entity from Caprine animal to Hoofed Mammal also seems to work.
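For reference, here is a minimal sketch of that finer-grained variant. The tenfold threshold resolution is the only change from the ungolfed code below; the name f1000 is purely illustrative:

f1000[image_] := Count[
                   Table[
                     ImageInstanceQ[
                       image, "caprine animal",
                       RecognitionThreshold -> threshold
                     ],
                     {threshold, 0, 0.5, 0.001}
                   ],
                   True
                 ]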

Ungolfed:

goatness[image_] := Count[
                      Table[
                        ImageInstanceQ[
                          image, Entity["Concept", "CaprineAnimal::4p79r"],
                          RecognitionThreshold -> threshold
                        ],
                        {threshold, 0, 0.5, 0.01}
                      ],
                      True
                    ]

Function[{image},
  StringJoin[      
    If[goatness[image] > goatness[ImageReflect[image]],
      "Up",
      "Down"
    ],
    "goat"
  ]
]

Alternative solution, 100% + bonus

g[t_][i_] := ImageInstanceQ[i, "caprine animal", RecognitionThreshold -> t]
f[i_, l_: 0, u_: 1] := Module[{m = (2 l + u)/3, r},
  r = g[m] /@ {i, ImageReflect@i};
  If[Equal @@ r,
   If[First@r, f[i, m, u], f[i, l, m]],
   If[First@r, "Up", "Down"] <> "goat"
   ]
  ]

This one uses the same strategy as before, but with a binary search over the threshold. There are two functions involved here:

  • g[t] returns whether or not its argument is a goaty image with threshold t.
  • f takes three parameters: an image, and a lower and an upper bound on the threshold. It is recursive: it tests a threshold m between the lower and upper bounds (biased towards the lower). If the image and the reflected image are both goaty or both non-goaty, it eliminates the lower or upper part of the range as appropriate and calls itself again. Otherwise, if exactly one of the images is goaty, it returns Upgoat if the first (unreflected) image is goaty and Downgoat if the second (reflected) image is goaty. (A small numeric illustration of the midpoint rule follows this list.)
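To see the bias towards the lower end, note that m = (2l + u)/3 sits one third of the way from l to u. Purely as an illustration (not part of the answer), if the search keeps taking the branch that replaces u by m, the interval shrinks geometrically towards l:

NestList[{First@#, (2 First@# + Last@#)/3} &, {0, 1}, 4]
(* {{0, 1}, {0, 1/3}, {0, 1/9}, {0, 1/27}, {0, 1/81}} *)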

The function definitions deserve a little explanation. First, function application is left-associative. This means that something like g[x][y] is interpreted as (g[x])[y]: "the result of g[x], applied to y."
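For example, with a throwaway definition (h is not part of the answer):

h[a_][b_] := a - b
h[10][3]      (* 7 *)
(h[10])[3]    (* also 7; it is exactly the same expression *)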

Second, assignment in Mathematica is roughly equivalent to defining a replacement rule. That is, f[x_] := x^2 does not mean "declare a function named f with parameter x that returns x^2"; its meaning is closer to "whenever you see something of the form f[ ... ], call the thing inside it x and replace the whole expression with x^2."
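For instance (sq is a throwaway name):

sq[x_] := x^2
sq[3]       (* rewritten to 3^2, which evaluates to 9 *)
sq[a + b]   (* (a + b)^2: the rule fires on anything of the form sq[...] *)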

Putting these two together, we can see that the definition of g is telling Mathematica to replace any expression of the form (g[ ... ])[ ... ] with the right-hand side of the assignment.

When Mathematica encounters the expression g[m] (in the second line of f), it sees that the expression does not match any rules that it knows and leaves it unchanged. Then it matches the Map operator /@, whose arguments are g[m] and the list {i, ImageReflect@i}. (/@ is infix notation; this expression is exactly equivalent to Map[g[m], { ... }].) The Map is replaced by applying its first argument to each element of its second argument, so we get {(g[m])[i], (g[m])[ ... ]}. Now Mathematica sees that each element matches the definition of g and does the replacement.
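Concretely, with another throwaway definition:

p[t_][i_] := i^t
p[2] /@ {3, 4}
(* Map first produces {p[2][3], p[2][4]}; each element then matches the rule, giving {9, 16} *)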

In this way we get g to act like a function that returns another function; that is, it acts roughly as if we had written:

g[t_] := Function[{i}, ImageInstanceQ[i, "caprine animal", RecognitionThreshold -> t]]

(Except in this case g[t] on its own evaluates to a Function, whereas before g[t] on its own was not transformed at all.)

The final trick I use is an optional pattern. The pattern l_ : 0 means "match any expression and make it available as l, or match nothing and make 0 available as l." So if you call f[i] with one argument (the image to test), it is as if you had called f[i, 0, 1].
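For example (plus is a throwaway name):

plus[x_, y_: 10] := x + y
plus[5]      (* 15: y defaults to 10 *)
plus[5, 1]   (* 6 *)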

Here is the test harness I used:

gist = Import["https://api.github.com/gists/3fb94bfaa7364ccdd8e2", "JSON"];
{names, urls} = Transpose[{"filename", "raw_url"} /. Last /@ ("files" /. gist)];
images = Import /@ urls;
result = f /@ images
Tally@MapThread[StringContainsQ[##, IgnoreCase -> True] &, {names, result}]
(* {{True, 18}} *)

user = "items" /.
           Import["https://api.stackexchange.com/2.2/users/40695?site=codegolf", "JSON"];
pic = Import[First["profile_image" /. user]];
name = First["display_name" /. user];
name == f@pic
(* True *)

JavaScript, 93.9%

var solution = function(imageUrl, settings) {

  // Settings
  settings = settings || {};
  var colourDifferenceCutoff = settings.colourDifferenceCutoff || 0.1,
      startX = settings.startX || 55,
      startY = settings.startY || 53;

  // Draw the image to the canvas
  var canvas = document.createElement("canvas"),
      context = canvas.getContext("2d"),
      image = new Image();
  image.src = imageUrl;
  image.onload = function(e) {
    canvas.width = image.width;
    canvas.height = image.height;
    context.drawImage(image, 0, 0);

    // Gets the average colour of an area
    function getColour(x, y) {

      // Get the image data from the canvas
      var sizeX = image.width / 100,
          sizeY = image.height / 100,
          data = context.getImageData(
            x * sizeX | 0,
            y * sizeY | 0,
            sizeX | 0,
            sizeY | 0
          ).data;

      // Get the average of the pixel colours
      var average = [ 0, 0, 0 ],
          length = data.length / 4;
      for(var i = 0; i < length; i++) {
        average[0] += data[i * 4] / length;
        average[1] += data[i * 4 + 1] / length;
        average[2] += data[i * 4 + 2] / length;
      }
      return average;
    }

    // Gets the lightness of similar colours above or below the centre
    function getLightness(direction) {
      var centre = getColour(startX, startY),
          colours = [],
          increment = direction == "above" ? -1 : 1;
      for(var y = startY; y > 0 && y < 100; y += increment) {
        var colour = getColour(startX, y);

        // If the colour is sufficiently different
        if(
          (
            Math.abs(colour[0] - centre[0]) +
            Math.abs(colour[1] - centre[1]) +
            Math.abs(colour[2] - centre[2])
          ) / 256 / 3
          > colourDifferenceCutoff
        ) break;
        else colours.push(colour);
      }

      // Calculate the average lightness
      var lightness = 0;
      for(var i = 0; i < colours.length; i++) {
        lightness +=
          (colours[i][0] + colours[i][1] + colours[i][2])
          / 256 / 3 / colours.length;
      }

      /*
      console.log(
        "Direction:", direction,
        "Checked y = 50 to:", y,
        "Average lightness:", lightness
      );
      */
      return lightness;
    }

    // Compare the lightness above and below the starting point
    //console.log("Results for:", imageUrl);
    var above = getLightness("above"),
        below = getLightness("below"),
        result = above > below ? "Upgoat" : "Downgoat";
    console.log(result);
    return result;
  };
};
<div ondrop="event.preventDefault();r=new FileReader;r.onload=e=>{document.getElementById`G`.src=imageUrl=e.target.result;console.log=v=>document.getElementById`R`.textContent=v;solution(imageUrl);};r.readAsDataURL(event.dataTransfer.files[0]);"
     ondragover="event.preventDefault()"
     style="height:160px;border-radius:12px;border:2px dashed #999;font-family:Arial,sans-serif;padding:8px">
  <p style="font-style:italic;padding:0;margin:0">Drag &amp; drop image <strong>file</strong> (not just link) to test here... (requires HTML5 browser)</p>
  <img style="height:100px" id="G">
  <pre id="R"></pre>
</div>

Explanation

A simple implementation of @BlackCap's idea of checking where the light is coming from.

Most of the goats are in the centre of their images, and their bellies are always darker than their backs because of the sunlight. The program starts at the middle of the image and notes the colour there. It then takes the average lightness of the pixels above and below the centre, up to the point where the colour differs from the centre colour (which is where the body of the goat ends and the background starts). Whichever side is lighter determines whether it is an upgoat or a downgoat.

Fails for downgoat 9 and upgoats 7 and 9 in the second test set.


Java, 100% (previously 93.9%)

This works by determining the row contrast in the upper and lower parts of the image. I assume that the contrast in the bottom half of the image is higher, for two reasons:

  • the four legs are in the bottom part
  • the background in the upper part is usually blurred, because it is the out-of-focus area

I determine the contrast for each row by calculating the differences between neighboring pixel values, squaring each difference, and summing all the squares.

Update

Some images from the second batch caused problems with the original algorithm.

upgoat3.jpg

This image uses transparency, which was previously ignored. There are several ways to solve this problem; I simply chose to render all images onto a 400x400 black background. This has the following advantages:

  • handles images with alpha channel
  • handles indexed and grayscale images
  • improves performance (no need to process those 13MP images)

downgoat8.jpg/upgoat8.jpg

These images have exaggerated detail in the body of the goat. The fix was to blur the image in the vertical direction only. However, this caused problems with images from the first batch, which have vertical structures in the background. The remedy was to count only the differences that exceed a certain threshold, and to ignore the actual magnitude of each difference.

In short, the updated algorithm looks for areas with many differences in images that, after preprocessing, look like this:

[Image: example of a goat image after the preprocessing described above]

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.io.File;
import java.io.IOException;

import javax.imageio.ImageIO;

public class UpDownGoat {
    private static final int IMAGE_SIZE = 400;
    private static final int BLUR_SIZE = 50;

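    // Blurs the image vertically only: each output sample is the average of
    // BLUR_SIZE consecutive samples in its column, so the result loses
    // BLUR_SIZE - 1 rows of height.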
    private static BufferedImage blur(BufferedImage image) {
        BufferedImage result = new BufferedImage(image.getWidth(), image.getHeight() - BLUR_SIZE + 1,
                BufferedImage.TYPE_INT_RGB);
        for (int b = 0; b < image.getRaster().getNumBands(); ++b) {
            for (int x = 0; x < result.getWidth(); ++x) {
                for (int y = 0; y < result.getHeight(); ++y) {
                    int sum = 0;
                    for (int y1 = 0; y1 < BLUR_SIZE; ++y1) {
                        sum += image.getRaster().getSample(x, y + y1, b);
                    }
                    result.getRaster().setSample(x, y, b, sum / BLUR_SIZE);
                }
            }
        }
        return result;
    }

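    // Measures the contrast of rows y0..y1-1 by counting horizontal
    // neighbour differences that exceed a fixed threshold (the update
    // described above; the original version summed squared differences).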
    private static long calcContrast(Raster raster, int y0, int y1) {
        long result = 0;
        for (int b = 0; b < raster.getNumBands(); ++b) {
            for (int y = y0; y < y1; ++y) {
                long prev = raster.getSample(0, y, b);
                for (int x = 1; x < raster.getWidth(); ++x) {
                    long current = raster.getSample(x, y, b);
                    result += Math.abs(current - prev) > 5 ? 1 : 0;
                    prev = current;
                }
            }
        }
        return result;
    }

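    // Renders the image onto a 400x400 black canvas, blurs it vertically,
    // and reports whether the top half shows less contrast than the bottom
    // half (in which case the goat is assumed to be upright).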
    private static boolean isUp(File file) throws IOException {
        BufferedImage image = new BufferedImage(IMAGE_SIZE, IMAGE_SIZE, BufferedImage.TYPE_INT_RGB);
        Graphics2D graphics = image.createGraphics();
        graphics.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
        graphics.drawImage(ImageIO.read(file), 0, 0, image.getWidth(), image.getHeight(), null);
        graphics.dispose();
        image = blur(image);
        int halfHeight = image.getHeight() / 2;
        return calcContrast(image.getRaster(), 0, halfHeight) < calcContrast(image.getRaster(),
                image.getHeight() - halfHeight, image.getHeight());
    }

    public static void main(String[] args) throws IOException {
        System.out.println(isUp(new File(args[0])) ? "Upgoat" : "Downgoat");
    }
}