Divide Two Numbers Using Long Division

C

Not exactly long division - this answer uses the method from the really old days.

#include <stdio.h>
#include <stdlib.h>
int main() {
    int a,b;
    scanf("%d %d", &a, &b);
    int *p=calloc(b, sizeof(int));      /* one counter per group, all zero */
    int *q=p;
    while(a--) {                        /* deal the sheep out one at a time */
        (*p)++;
        if(p-q<b-1) p++;                /* move to the next group... */
        else p-=b-1;                    /* ...or wrap back to the first */
    }
    p=q;
    int r=0, i;
    for(i=0; i<b; i++) r+=p[i]-p[b-1];  /* remainder: surplus over the last group */
    printf("%d %d\n", p[b-1], r);       /* the last group's count is the quotient */
    free(p);
    return 0;
}

Explanation:

Suppose you are given a sheep and need to split them into b groups. The method used here is to assign each sheep to a different group until the group count reaches b, then start again from the first group. This repeats until there are no sheep left. The quotient is then the number of sheep in the last group (the last group is the only one guaranteed never to receive a leftover sheep, so it ends up with exactly floor(a/b)), and the remainder is the sum of the differences between each group's count and the last group's count.

An illustration for 8/3:

       |Group 1 | Group 2 | Group 3
-------------------------------------
       | 1      | 2       | 3        // first sheep in group 1, second sheep in group 2, etc
       | 4      | 5       | 6
       | 7      | 8       |
-------------------------------------
total: | 3      | 3       | 2

So the quotient is 2 and the remainder is (3-2)+(3-2)=2.


Bash + coreutils

Forget what you learned in school. Nobody uses long division. It's always important to choose the right tool for the job. dd is known by some as the Swiss Army knife of command-line tools, so it really is the right tool for every job!:

#!/bin/bash

# Read $1 bytes as one input block and re-block them into $2-byte output
# blocks; dd reports "q+1 records out", where q is the number of full blocks.
q=$(dd if=/dev/zero of=/dev/null ibs=$1 count=1 obs=$2 2>&1 | grep out | cut -d+ -f1)
# Copy q*$2 bytes; subtracting that from $1 leaves the remainder.
r=$(( $1 - $(dd if=/dev/zero of=/dev/null bs=$q count=$2 2>&1 | grep bytes | cut -d' ' -f1) ))
echo $q $r

Output:

$ ./divide.sh 4 2
2 0
$ ./divide.sh 7182 15
478 12
$ 

Sorry, I know this is a subversive, trolly answer, but I just couldn't resist. Cue the downvotes...


C

Long division! At least how a standard computer algorithm might do it, one binary digit (bit) at a time. Handles negatives, too.

#include <stdio.h>
#include <limits.h>   /* INT_MIN, INT_MAX */

#define INT_BITS (sizeof(int)*8)

typedef struct div_result div_result;
struct div_result {
    int quotient;
    int remainder;
};

div_result divide(int dividend, int divisor) {
    div_result result;
    int negative = (dividend < 0) ^ (divisor < 0);

    if (divisor == 0) {
        result.quotient = dividend < 0 ? INT_MIN : INT_MAX;
        result.remainder = 0;
        return result;
    }

    if ((dividend == INT_MIN) && (divisor == -1)) {
        result.quotient = INT_MAX;
        result.remainder = 0;
        return result;
    }

    if (dividend < 0) {
        dividend = -dividend;
    }
    if (divisor < 0) {
        divisor = -divisor;
    }

    int quotient = 0, remainder = 0;

    for (int i = 0; i < INT_BITS; i++) {
        quotient <<= 1;

        /* Bring down the next bit of the dividend, most significant first. */
        remainder <<= 1;
        remainder += (dividend >> (INT_BITS - 1)) & 1;
        dividend <<= 1;

        /* The divisor "goes into" the partial remainder: subtract it and
           set the corresponding quotient bit. */
        if (remainder >= divisor) {
            remainder -= divisor;
            quotient++;
        }
    }

    if (negative) {
        result.quotient = -quotient;
        result.remainder = -remainder;
    } else {
        result.quotient = quotient;
        result.remainder = remainder;
    }
    return result;
}

int main() {
    int dividend, divisor;
    scanf("%i%i", &dividend, &divisor);

    div_result result = divide(dividend, divisor);
    printf("%i %i\r\n", result.quotient, result.remainder);
}

It can be seen in action here. I chose to make negative results symmetrical with positive ones: both the quotient and the remainder come out negative.

Handling of edge cases is done with best effort. Division by zero returns the integer of highest magnitude with the same sign as the dividend (that's INT_MIN or INT_MAX), and INT_MIN / -1 returns INT_MAX.