Generate all binary strings of length n with k bits set

Python

import itertools

def kbits(n, k):
    result = []
    for bits in itertools.combinations(range(n), k):
        s = ['0'] * n
        for bit in bits:
            s[bit] = '1'
        result.append(''.join(s))
    return result

print(kbits(4, 3))

Output: ['1110', '1101', '1011', '0111']

Explanation:

Essentially we need to choose the positions of the 1-bits. There are n choose k ways of choosing k positions among n total bits. itertools is a nice module that does this for us. itertools.combinations(range(n), k) will choose k positions from [0, 1, 2, ..., n-1], and then it's just a matter of building the string given those bit indices.

Since you aren't using Python, look at the pseudo-code for itertools.combinations here:

http://docs.python.org/library/itertools.html#itertools.combinations

Should be easy to implement in any language.
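
If you do need to roll it yourself, a straight Python rendering of that documented pseudo-code looks roughly like this (an illustrative sketch of the algorithm the docs describe, not the library's actual C implementation):

def combinations(iterable, r):
    # Walk an index vector [0, 1, ..., r-1] through every strictly
    # increasing r-tuple of positions into the pool.
    pool = tuple(iterable)
    n = len(pool)
    if r > n:
        return
    indices = list(range(r))
    yield tuple(pool[i] for i in indices)
    while True:
        # Find the rightmost index that can still be incremented.
        for i in reversed(range(r)):
            if indices[i] != i + n - r:
                break
        else:
            return
        indices[i] += 1
        # Reset everything to its right to the smallest valid values.
        for j in range(i + 1, r):
            indices[j] = indices[j - 1] + 1
        yield tuple(pool[i] for i in indices)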


This method will generate all integers with exactly N '1' bits.

From https://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation

Compute the lexicographically next bit permutation

Suppose we have a pattern of N bits set to 1 in an integer and we want the next permutation of N 1 bits in a lexicographical sense. For example, if N is 3 and the bit pattern is 00010011, the next patterns would be 00010101, 00010110, 00011001, 00011010, 00011100, 00100011, and so forth. The following is a fast way to compute the next permutation.

unsigned int v; // current permutation of bits
unsigned int w; // next permutation of bits

unsigned int t = v | (v - 1); // t gets v's least significant 0 bits set to 1
// Next set to 1 the most significant bit to change,
// set to 0 the least significant ones, and add the necessary 1 bits.
w = (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));

The __builtin_ctz(v) GNU C compiler intrinsic for x86 CPUs returns the number of trailing zeros. If you are using Microsoft compilers for x86, the intrinsic is _BitScanForward. These both emit a bsf instruction, but equivalents may be available for other architectures. If not, then consider using one of the methods for counting the consecutive zero bits mentioned earlier. Here is another version that tends to be slower because of its division operator, but it does not require counting the trailing zeros.

unsigned int t = (v | (v - 1)) + 1;
w = t | ((((t & -t) / (v & -v)) >> 1) - 1);

Thanks to Dario Sneidermanis of Argentina, who provided this on November 28, 2009.
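
As an illustration, here is a rough Python sketch of the same trick driving a full enumeration (the names next_perm and kbits_bithack are mine, not from the bithacks page; Python's arbitrary-precision integers behave like two's complement for the bitwise steps, so ~t works as in C):

def next_perm(v):
    # Lexicographically next permutation of the set bits in v (v > 0).
    t = v | (v - 1)                    # set v's trailing 0 bits to 1
    ctz = (v & -v).bit_length() - 1    # count of trailing zeros in v
    return (t + 1) | (((~t & -~t) - 1) >> (ctz + 1))

def kbits_bithack(n, k):
    v = (1 << k) - 1                   # smallest value with k bits set
    while v < (1 << n):
        yield format(v, '0%db' % n)
        if v == 0:                     # k == 0: only the all-zeros string
            return
        v = next_perm(v)

print(list(kbits_bithack(4, 3)))  # ['0111', '1011', '1101', '1110']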


Forget about implementation ("be it done with strings" is obviously an implementation issue!) -- think about the algorithm, for Pete's sake... just as in your very first TAG, man!

What you're looking for is all combinations of K items out of a set of N (the indices, 0 to N-1, of the set bits). That's obviously simplest to express recursively, e.g., pseudocode:

combinations(K, setN):
  if K > length(setN): return "no combinations possible"
  if K == 0: return "empty combination"
  # combinations including the first item, plus those excluding it:
  return ((first-item-of setN) combined combinations(K-1, all-but-first-of setN))
         union combinations(K, all-but-first-of setN)

i.e., the first item is either present or absent: if present, you have K-1 left to go (from the tail, aka all-but-first); if absent, you still have K left to go.

Pattern-matching functional languages like SML or Haskell may be best for expressing this pseudocode (procedural ones, like my big love Python, may actually mask the problem by including too-rich functionality, such as itertools.combinations, which does all the hard work for you and therefore HIDES it from you!).

What are you most familiar with, for this purpose -- Scheme, SML, Haskell, ...? I'll be happy to translate the above pseudocode for you. I can do it in languages such as Python too, of course -- but since the point is getting you to understand the mechanics for this homework assignment, I won't use too-rich functionality such as itertools.combinations, but rather recursion (and recursion-elimination, if needed) on more obvious primitives (such as head, tail, and concatenation). But please DO let us know what pseudocode-like language you're most familiar with! (You DO understand that the problem you state is exactly equivalent to "get all combinations of K items out of range(N)", right?).
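
In the meantime, here's one possible Python rendering of the pseudocode above, deliberately using only head, tail, and concatenation (the names k_combinations and kbits_recursive are mine, for illustration):

def k_combinations(K, setN):
    # Can't pick K items from fewer than K: no combinations at all.
    if K > len(setN):
        return []
    # Picking zero items: exactly one combination, the empty one.
    if K == 0:
        return [[]]
    head, tail = setN[0], setN[1:]
    # Combinations that include the head, plus those that don't.
    with_head = [[head] + rest for rest in k_combinations(K - 1, tail)]
    return with_head + k_combinations(K, tail)

def kbits_recursive(n, k):
    # The chosen items are the indices of the '1' bits.
    return [''.join('1' if i in chosen else '0' for i in range(n))
            for chosen in k_combinations(k, list(range(n)))]

print(kbits_recursive(4, 3))  # ['1110', '1101', '1011', '0111']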


This C# method returns an enumerator that creates all the combinations. Because it creates the combinations lazily as you enumerate them, it only uses stack space, so the number of combinations it can generate isn't limited by available memory.

This is the first version that I came up with. It's limited by stack space to a length of about 2700:

static IEnumerable<string> BinStrings(int length, int bits) {
  if (length == 1) {
    yield return bits.ToString();
  } else {
    // A leading "0" is possible only if the remaining length-1
    // positions can still hold all of the bits.
    if (length > bits) {
      foreach (string s in BinStrings(length - 1, bits)) {
        yield return "0" + s;
      }
    }
    // A leading "1" consumes one of the bits.
    if (bits > 0) {
      foreach (string s in BinStrings(length - 1, bits - 1)) {
        yield return "1" + s;
      }
    }
  }
}

This is the second version, which uses a binary split rather than splitting off the first character, so it uses the stack much more efficiently. It's limited only by the memory needed for the string it creates in each iteration, and I have tested it up to a length of 10000000:

static IEnumerable<string> BinStrings(int length, int bits) {
  if (length == 1) {
    yield return bits.ToString();
  } else {
    int first = length / 2;
    int last = length - first;
    // The first half gets i bits; i is bounded so the second half
    // is never asked to hold more bits than it has positions.
    int low = Math.Max(0, bits - last);
    int high = Math.Min(bits, first);
    for (int i = low; i <= high; i++) {
      foreach (string f in BinStrings(first, i)) {
        foreach (string l in BinStrings(last, bits - i)) {
          yield return f + l;
        }
      }
    }
  }
}
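
For comparison, a rough Python generator along the same divide-and-conquer lines might look like this (my own sketch, not a literal translation of the C# above):

def bin_strings(length, bits):
    # Base case: a single position holds either '0' or '1'.
    if length == 1:
        yield str(bits)
        return
    first = length // 2
    last = length - first
    # Give the first half i of the bits; bound i below so the second
    # half isn't overfilled, and above by the bits and positions available.
    for i in range(max(0, bits - last), min(bits, first) + 1):
        for f in bin_strings(first, i):
            for l in bin_strings(last, bits - i):
                yield f + l

print(list(bin_strings(4, 3)))  # ['0111', '1011', '1101', '1110']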