Nontrivial background removal

1. This method is from the documentation of the function ClusterClassify

image = Import["https://i.stack.imgur.com/zP5xF.jpg"];
imageData = Flatten[ImageData[ColorConvert[image, "LAB"]], 1];
c = ClusterClassify[imageData, 4, Method -> "KMedoids"];
decision = c[imageData];
mask = Image /@
   ComponentMeasurements[
      {image, Partition[decision, First@ImageDimensions[image]]},
      "Mask"][[All, 2]] (* one binary mask per cluster *)


allMask = FillingTransform[Dilation[ColorNegate[mask[[4]]], 1]];
SetAlphaChannel[image, Blur[allMask, 8]]


2. Based on machine learning

Method one: classify each pixel with a NetChain

I have to say this method is worthless in real life, because it is extremely inefficient (with a CUDA-capable GPU it may be considerably faster). I don't remember how long it took to run. Well, just for fun.

First we select the regions we need. The selection only has to be rough, which means a few stray pixels in the training data are fine. Of course, you can also build your own training data. Here is my arbitrary selection.
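
For instance, here is a minimal sketch of how the selections might be prepared; the crop coordinates are hypothetical rough guesses, and yes/no are simply sub-images containing mostly ring and mostly background pixels:

image = Import["https://i.stack.imgur.com/zP5xF.jpg"];
yes = ImageTrim[image, {{210, 170}, {340, 290}}]; (* rough ring-only crop; adjust to taste *)
no = ImageTrim[image, {{1, 1}, {120, 120}}];      (* rough background-only crop *)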

Then define a net and train it:

image = Import["https://i.stack.imgur.com/zP5xF.jpg"];
trainData = Join[
   Thread[Catenate[ImageData[no]] -> False],
   Thread[Catenate[ImageData[yes]] -> True]]; (* pixel {r, g, b} -> class rules *)
net = NetChain[{20, Tanh, 2, 
    SoftmaxLayer["Output" -> NetDecoder[{"Class", {True, False}}]]}, 
   "Input" -> 3];
ringQ = NetTrain[net, trainData, MaxTrainingRounds -> 20]

Be patient and wait a few minutes; then you will get your ring. The final result depends on your training data and some luck.

Image[Map[If[ringQ[#], #, N@{1, 1, 1}] &, ImageData[image], {2}]]

We can use the first method above to refine this in a follow-up step, for example:
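
Here is a minimal sketch of such a refinement, reusing the FillingTransform/Blur step from part 1; the 0.99 binarization threshold is an assumption that treats the white-filled background as the mask:

classified = Image[Map[If[ringQ[#], #, N@{1, 1, 1}] &, ImageData[image], {2}]];
rough = ColorNegate@Binarize[classified, 0.99]; (* white background -> 0, ring -> 1 *)
SetAlphaChannel[image, Blur[FillingTransform[Dilation[rough, 1]], 8]]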

Method two: use the built-in function Classify

This method produces a decent result, but I have to admit this code took me a whole night to run, which means it is even slower than the NetChain. First, make some sample data:

match = Classify[<|False -> Catenate[ImageData[no]],
    True -> Catenate[ImageData[yes]]|>]; (* no/yes: the sample crops from before *)
ImageApply[If[match[#], #, {1, 1, 1}] &, image]

Please be even more patient; after just one night, the result will appear, like this:

3. The answers above were for other motivations, or just for fun; in this part I will post some pure image-processing methods.

image = Import["https://i.stack.imgur.com/zP5xF.jpg"];

Method one

SetAlphaChannel[image, 
 Erosion[Blur[
   DeleteSmallComponents[
    FillingTransform[Binarize[GradientFilter[image, 1], 0.035]]], 10],
   1]]

Method two

SetAlphaChannel[image, 
 Blur[Binarize[
   Image[WatershedComponents[GradientFilter[image, 2], 
      Method -> {"MinimumSaliency", 0.2}] - 1]], 5]]

Method three

SetAlphaChannel[image, 
 Blur[FillingTransform[
   MorphologicalBinarize[
    ColorNegate[
     First[ColorSeparate[ColorConvert[image, "CMYK"]]]], {.6, .93}]], 
  7]]

Last but not least, this method does a principal-component decomposition of the color channels, which can handle a wider range of situations.

First[KarhunenLoeveDecomposition[
  ColorCombine /@ Tuples[ColorSeparate[image], {3}]]]

Note that pictures 2 through 5 each have stronger contrast than the original. We can then use the first three methods for the next step.
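
For instance, here is a minimal sketch that feeds one of the decorrelated channels into Method one; picking component 2 is an assumption, since which component carries the strongest contrast varies, so inspect the list and adjust the index:

klImages = First[KarhunenLoeveDecomposition[
    ColorCombine /@ Tuples[ColorSeparate[image], {3}]]];
contrast = klImages[[2]]; (* assumed high-contrast component *)
SetAlphaChannel[image,
 Erosion[
  Blur[DeleteSmallComponents[
    FillingTransform[Binarize[GradientFilter[contrast, 1], 0.035]]], 10], 1]]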


Here's a method that could be iterated and refined to replicate the OpenCV result, I think.

First we use yobe's ClusterClassify method, then we fill in the holes by generating a mask that captures the frame we need, and combine everything into a single mask.

First, the boilerplate:

img = Import["https://i.stack.imgur.com/zP5xF.jpg"];

clusterGet[image_] :=
  Module[{imageData, c, decision},
   imageData = Flatten[ImageData[ColorConvert[image, "LAB"]], 1]; 
   c = ClusterClassify[imageData, 4, Method -> "KMedoids"]; 
   decision = c[imageData]; 
   Image /@ 
    ComponentMeasurements[{image, 
       Partition[decision, First@ImageDimensions[image]]}, "Mask"][[All,
       2]]
   ];

maskCombine[{base_, others__}] :=
  Block[{root = base,
    alphas = SetAlphaChannel[#, ColorNegate@#] & /@ {others}},
   Do[root = ImageCompose[root, a], {a, alphas}]; (* accumulate each overlay *)
   root
   ];

then figure out which mask we want:

baseMask = clusterGet[img][[4]]

base mask

then we need to create a filling mask for that:

fillingMask = Closing[
   EdgeDetect@
    MeanShiftFilter[ImageAdjust[Lighter@img, 2], 1, .01, 
     MaxIterations -> 5],
   4.5];

filling mask

then set the composite mask as the alpha channel of the overall image:

SetAlphaChannel[img,
 ColorNegate@maskCombine@{baseMask, fillingMask}]

composite

Using more sophisticated filters I've been able to build a better filling mask that minimizes the amount of lost green space/frame, but I can't remember exactly which set of filters I combined. For those looking to extend this, the edge-preserving filters such as PeronaMalikFilter appear to be the place to start. There are tons of filters to apply, so I'm sure trial and error can give you the results you want.
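
As a hedged starting point (not the exact filter set I used), one variation simply swaps PeronaMalikFilter in for the MeanShiftFilter step; the time parameter 10 and the closing radius are guesses to tune:

fillingMask2 = Closing[
   EdgeDetect@PeronaMalikFilter[ImageAdjust[Lighter@img, 2], 10],
   4.5];
SetAlphaChannel[img, ColorNegate@maskCombine@{baseMask, fillingMask2}]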

You could also use Java and OpenCV, doing more or less what Leonid Shifrin does here, or write your own simple boundary-detection code. I did some of the latter, but it's generally too slow to be properly workable, and figuring out the appropriate pixel distance function is, again, a matter of trial and error.
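
For the do-it-yourself route, here is a minimal sketch of the pixel-distance idea in Wolfram Language rather than Java; the sampled background point {5, 5} and the 0.25 threshold are assumptions to tune:

bg = RGBColor @@ PixelValue[img, {5, 5}];  (* assumed background sample point *)
dist = ColorDistance[img, bg];             (* per-pixel distance to background color *)
SetAlphaChannel[img,
 Blur[FillingTransform[DeleteSmallComponents@Binarize[dist, 0.25]], 5]]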


With helpful ideas from KAI:

img = Import["https://i.stack.imgur.com/zP5xF.jpg"]

mask = FillingTransform@
   DeleteBorderComponents@
    DeleteSmallComponents@
     ColorNegate@
      ContourDetect[ImageAdjust@img, 0.4]

back = MorphologicalPerimeter@
        Dilation[Closing[mask, DiskMatrix[15]], DiskMatrix[2]]

c = Binarize @ Colorize @ GrowCutComponents[img, {mask, back}]

ImageMultiply[img, c] // RemoveBackground // ImageCrop


Not a complete solution, but it may serve as a starting point.