How do I convert a directory of jpeg images to TFRecords file in tensorflow?

TensorFlow's Inception model ships a file, build_image_data.py, that can accomplish this, under the assumption that each subdirectory represents a label.
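To make that assumption concrete, here is a small sketch (the directory names and the helper function are mine, not part of build_image_data.py) of how such a layout maps file names to integer labels:

import os

# Hypothetical layout assumed by build_image_data.py:
#   data_dir/daisy/img1.jpg, data_dir/daisy/img2.jpg, ...
#   data_dir/roses/img3.jpg, ...
# Each subdirectory name acts as the label for the images it contains.
def list_images_with_labels(data_dir):
  class_names = sorted(
      d for d in os.listdir(data_dir)
      if os.path.isdir(os.path.join(data_dir, d)))
  filenames, labels = [], []
  for label_index, class_name in enumerate(class_names):
    class_dir = os.path.join(data_dir, class_name)
    for fname in sorted(os.listdir(class_dir)):
      if fname.lower().endswith(('.jpg', '.jpeg')):
        filenames.append(os.path.join(class_dir, fname))
        labels.append(label_index)
  return filenames, labels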


I hope this helps:

import numpy as np
import tensorflow as tf
from PIL import Image

filename_queue = tf.train.string_input_producer(['/Users/HANEL/Desktop/tf.png'])  # list of files to read

reader = tf.WholeFileReader()
key, value = reader.read(filename_queue)

my_img = tf.image.decode_png(value)  # use decode_png or decode_jpeg depending on your files

init_op = tf.global_variables_initializer()
with tf.Session() as sess:
  sess.run(init_op)

  # Start populating the filename queue.
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(coord=coord)

  for i in range(1):  # length of your filename list
    image = my_img.eval()  # here is your image Tensor :)

  print(image.shape)
  Image.fromarray(np.asarray(image)).show()

  coord.request_stop()
  coord.join(threads)

To get all images as an array of tensors, use the following code example.

Github repo of ImageFlow


Update:

In the previous answer I only explained how to read an image in TF, not how to save it in TFRecords. For that you should use:

import os
import tensorflow as tf


def _int64_feature(value):
  return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))


def _bytes_feature(value):
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

# images and labels array as input
def convert_to(images, labels, name):
  num_examples = labels.shape[0]
  if images.shape[0] != num_examples:
    raise ValueError("Images size %d does not match label size %d." %
                     (images.shape[0], num_examples))
  rows = images.shape[1]
  cols = images.shape[2]
  depth = images.shape[3]

  filename = os.path.join(FLAGS.directory, name + '.tfrecords')
  print('Writing', filename)
  writer = tf.python_io.TFRecordWriter(filename)
  for index in range(num_examples):
    image_raw = images[index].tostring()
    example = tf.train.Example(features=tf.train.Features(feature={
        'height': _int64_feature(rows),
        'width': _int64_feature(cols),
        'depth': _int64_feature(depth),
        'label': _int64_feature(int(labels[index])),
        'image_raw': _bytes_feature(image_raw)}))
    writer.write(example.SerializeToString())
  writer.close()

More info here
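A quick sketch of how you might call convert_to (the array shapes, the flag definition and the output directory are placeholders of mine, not part of the original code):

import numpy as np
import tensorflow as tf

# convert_to above expects FLAGS.directory to name an existing output directory;
# it is defined here only for this sketch.
tf.app.flags.DEFINE_string('directory', '/tmp/data', 'Where to write the TFRecord file.')
FLAGS = tf.app.flags.FLAGS

# Hypothetical inputs: 100 RGB images of size 32x32 and integer labels 0-9.
images = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
labels = np.random.randint(0, 10, size=(100,), dtype=np.int64)

convert_to(images, labels, 'train')  # writes /tmp/data/train.tfrecords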

And you read the data like this:

# Remember to generate a filename queue holding your 'train.tfrecords' file path
def read_and_decode(filename_queue):
  reader = tf.TFRecordReader()
  _, serialized_example = reader.read(filename_queue)
  features = tf.parse_single_example(
      serialized_example,
      # Defaults are not specified since both keys are required.
      features={
          'image_raw': tf.FixedLenFeature([], tf.string),
          'label': tf.FixedLenFeature([], tf.int64),
      })

  # Convert from a scalar string tensor (whose single string has length
  # my_cifar.n_input) to a uint8 tensor with shape [my_cifar.n_input].
  image = tf.decode_raw(features['image_raw'], tf.uint8)

  image = tf.reshape(image, [my_cifar.n_input])
  image.set_shape([my_cifar.n_input])

  # OPTIONAL: Could reshape into a 28x28 image and apply distortions
  # here.  Since we are not applying any distortions in this
  # example, and the next step expects the image to be flattened
  # into a vector, we don't bother.

  # Convert from [0, 255] -> [-0.5, 0.5] floats.
  image = tf.cast(image, tf.float32) * (1. / 255) - 0.5

  # Convert label from a scalar uint8 tensor to an int32 scalar.
  label = tf.cast(features['label'], tf.int32)

  return image, label
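A sketch of how read_and_decode could be wired into an input pipeline (the file path, batch size and queue capacities are arbitrary choices of mine):

import tensorflow as tf

# Queue of TFRecord file names (a single hypothetical file here).
filename_queue = tf.train.string_input_producer(['/tmp/data/train.tfrecords'])
image, label = read_and_decode(filename_queue)

# Group single examples into shuffled batches.
images_batch, labels_batch = tf.train.shuffle_batch(
    [image, label], batch_size=32, capacity=2000, min_after_dequeue=1000)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(coord=coord)
  imgs, lbls = sess.run([images_batch, labels_batch])
  print(imgs.shape, lbls.shape)
  coord.request_stop()
  coord.join(threads)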

Note that the images will be saved in the TFRecord as uncompressed tensors, possibly increasing the size by a factor of about 5. That wastes storage space, and is likely to be rather slow because of the amount of data that needs to be read.
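One alternative, if you still want everything inside the TFRecord, is to store the JPEG-encoded bytes rather than the decoded pixels, and decode with tf.image.decode_jpeg at read time. A minimal sketch, assuming you already have lists of file paths and labels and the _bytes_feature / _int64_feature helpers from above (the output path and feature key are placeholders):

import tensorflow as tf

# filenames: list of paths to JPEG files; labels: matching list of ints.
with tf.python_io.TFRecordWriter('/tmp/data/train_jpeg.tfrecords') as writer:
  for filename, label in zip(filenames, labels):
    with tf.gfile.GFile(filename, 'rb') as f:
      encoded_jpeg = f.read()
    example = tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes_feature(encoded_jpeg),
        'label': _int64_feature(int(label))}))
    writer.write(example.SerializeToString())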

It's far better to just save the filename in the TFRecord, and read the file on demand. The new Dataset API works well, and the documentation has this example:

# Reads an image from a file, decodes it into a dense tensor, and resizes it
# to a fixed shape.
def _parse_function(filename, label):
  image_string = tf.read_file(filename)
  image_decoded = tf.image.decode_jpeg(image_string)
  image_resized = tf.image.resize_images(image_decoded, [28, 28])
  return image_resized, label

# A vector of filenames.
filenames = tf.constant(["/var/data/image1.jpg", "/var/data/image2.jpg", ...])

# `labels[i]` is the label for the image in `filenames[i]`.
labels = tf.constant([0, 37, ...])

dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
dataset = dataset.map(_parse_function)
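To actually pull data out, you would typically shuffle, batch and iterate; a short TF 1.x style sketch (the buffer and batch sizes are arbitrary):

dataset = dataset.shuffle(buffer_size=1000).batch(32)
iterator = dataset.make_one_shot_iterator()
next_images, next_labels = iterator.get_next()

with tf.Session() as sess:
  batch_images, batch_labels = sess.run([next_images, next_labels])
  print(batch_images.shape, batch_labels)

If your JPEGs mix grayscale and RGB, pass channels=3 to tf.image.decode_jpeg in _parse_function so every element has the same shape before batching.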

I had the same problem, too.

So here is how I got the TFRecords files from my own JPEG files.

Edit (Jan 5, 2020): added Solution 1, a better and faster way.

(Recommended) Solution 1: TFRecordWriter

See this TFRecords Guide post.
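The core idea from that guide, as a rough sketch rather than its exact code (the file names and feature keys here are my own, TF 2.x style): store the compressed JPEG bytes plus the label in a tf.train.Example, write it with tf.io.TFRecordWriter, and read it back with tf.data.

import tensorflow as tf

def _bytes_feature(value):
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
  return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

# Write: one tf.train.Example per JPEG, storing the compressed bytes.
def write_tfrecord(filenames, labels, output_path):
  with tf.io.TFRecordWriter(output_path) as writer:
    for filename, label in zip(filenames, labels):
      with open(filename, 'rb') as f:
        encoded_jpeg = f.read()
      example = tf.train.Example(features=tf.train.Features(feature={
          'image/encoded': _bytes_feature(encoded_jpeg),
          'image/label': _int64_feature(int(label))}))
      writer.write(example.SerializeToString())

# Read back with tf.data, decoding the JPEG on the fly.
def parse_example(serialized):
  features = tf.io.parse_single_example(serialized, {
      'image/encoded': tf.io.FixedLenFeature([], tf.string),
      'image/label': tf.io.FixedLenFeature([], tf.int64)})
  image = tf.io.decode_jpeg(features['image/encoded'], channels=3)
  return image, features['image/label']

dataset = tf.data.TFRecordDataset(['my_images.tfrecords']).map(parse_example)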

Solution 2:

From the official TensorFlow GitHub (How to Construct a New Dataset for Retraining): using the official Python script build_image_data.py directly with Bazel is a better idea.

Here are the instructions:

To run build_image_data.py, you can use the following command line:

# location where the TFRecord data will be saved.
OUTPUT_DIRECTORY=$HOME/my-custom-data/

# build the preprocessing script.
bazel build inception/build_image_data

# convert the data.
bazel-bin/inception/build_image_data \
  --train_directory="${TRAIN_DIR}" \
  --validation_directory="${VALIDATION_DIR}" \
  --output_directory="${OUTPUT_DIRECTORY}" \
  --labels_file="${LABELS_FILE}" \
  --train_shards=128 \
  --validation_shards=24 \
  --num_threads=8

where $OUTPUT_DIRECTORY is the location of the sharded TFRecords, and $LABELS_FILE is a text file read by the script that provides the list of all of the labels.
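For example, the labels file is just one label name per line; a tiny sketch that writes such a file (these class names are hypothetical, matching the flowers example):

# build_image_data.py reads one label name per line from $LABELS_FILE.
label_names = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
with open('labels.txt', 'w') as f:
  f.write('\n'.join(label_names) + '\n')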

Then, it should do the trick.

P.S. Bazel is a build tool made by Google; it plays a role similar to make and makefiles.

Solution 3:

First, I followed the instructions by @capitalistpug and checked the shell script file

(the shell script provided by Google: download_and_preprocess_flowers.sh)

Second, I also found a mini Inception-v3 training tutorial by NVIDIA

(NVIDIA's official "Speed Up Training with GPU-Accelerated TensorFlow")

Be careful: the following steps need to be executed in the Bazel WORKSPACE environment, so that the Bazel build files can run successfully.


First step: I commented out the part of download_and_preprocess_flowers.sh that downloads the ImageNet data set (which I had already downloaded), as well as the remaining parts that I don't need.

Second step: change directory to tensorflow/models/inception, which is the Bazel environment and was built by Bazel before:

$ cd tensorflow/models/inception 

Optional: if it has not been built before, type the following command:

$ bazel build inception/download_and_preprocess_flowers 

You need to figure out the content shown in the following screenshot:

[screenshot omitted]

And for the last step, type in the following command:

$ bazel-bin/inception/download_and_preprocess_flowers $Your/own/image/data/path

Then it will start calling build_image_data.py and create the TFRecords files.
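As a quick sanity check (a sketch; the output directory and shard name pattern are assumptions based on the flags above), you can count how many records were written:

import glob
import tensorflow as tf

# Count records in the sharded TFRecord files produced by build_image_data.py.
for path in sorted(glob.glob('/your/output/directory/train-*')):
  count = sum(1 for _ in tf.python_io.tf_record_iterator(path))
  print(path, count)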