If you look at Amazon, seven of the ten top-selling books are those coloring books for relaxation.
I wanted to see if it was possible to use machine learning to generate coloring books automatically. I ended up testing and combining several machine learning techniques: noise removal, object detection, image quantizing, deep learning object and region detection, and more.

The first test was straight-up style transfer. I trained about 20 different versions of styles using different patterns of black and white images, without much hyperparameter tuning. I think the images are pretty good; style transfer alone can take you about 95% of the way. As you can see from the results, some images look good, but others have too many lines or too few, making them unsuitable for coloring.

The next step was to try noise removal. I used OpenCV to remove large blobs of color. The experiment was essentially: is it easy to remove large patches of sky or water, areas of the image with too many uniform patterns? I then took those images and put them through style transfer. The results look .....

Since these models are just working with raw pixels, they don't really know what is going on in the image. Could we make the image processing smarter? I wanted to see if I could use semantic image segmentation. There is a paper called Fully Convolutional Networks for Semantic Segmentation that explains how to do that. Fortunately there are a few pretrained models for Caffe, so I ran my test images through one of them. The model gives you back a mask file with a number for the class each segment represents. I found that it still messed up on a lot of different images. Ideally I would retrain my own network on a better dataset, but collecting a better dataset that would work for my images would take too much time.

So then I wanted to see how pre-deep-learning segmentation performed. Once again, I turned to OpenCV.
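The blob-removal idea can be sketched without OpenCV at all. Here is a minimal pure-NumPy version that blanks out low-variance windows, the kind of uniform patch a sky or lake produces; the window size and variance threshold are my own guesses, not values from the original experiment:

```python
import numpy as np

def remove_flat_regions(gray, window=8, var_thresh=20.0):
    # Blank out windows whose pixel variance is low: large uniform
    # patches like sky or water become plain white, so they don't
    # turn into dense noise after style transfer.
    # `window` and `var_thresh` are illustrative, untuned values.
    out = gray.copy()
    h, w = gray.shape
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            patch = gray[y:y + window, x:x + window]
            if patch.var() < var_thresh:
                out[y:y + window, x:x + window] = 255
    return out
```

A real version would likely use OpenCV contours or flood fill to get smoother region borders, but the variance test captures the core idea: uniform texture means nothing worth coloring.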
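Image quantizing, mentioned in the list of techniques above, collapses gradients into a few flat tones so regions become easier to outline. The post doesn't say which method was used; this is a minimal uniform-binning sketch:

```python
import numpy as np

def quantize(img, levels=4):
    # Map each uint8 pixel to the midpoint of one of `levels` equal
    # bins, e.g. levels=4 leaves only the values 32, 96, 160, 224.
    step = 256 // levels
    return ((img // step) * step + step // 2).astype(np.uint8)
```

K-means in color space (e.g. `cv2.kmeans`) usually gives nicer palettes than uniform bins, at the cost of a model fit per image.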
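Once a segmentation model has produced its per-pixel class mask, turning that mask into colorable outlines is the easy part. A sketch of that step, assuming you already have the class-ID mask as a 2-D array (the FCN model loading itself is omitted):

```python
import numpy as np

def mask_to_outlines(mask):
    # mask: 2-D array of integer class IDs, e.g. the per-pixel output
    # of an FCN-style segmentation model. Pixels whose right or lower
    # neighbour belongs to a different class become black outline
    # pixels; everything else stays white, ready to color in.
    edges = np.zeros(mask.shape, dtype=bool)
    edges[:, 1:] |= mask[:, 1:] != mask[:, :-1]
    edges[1:, :] |= mask[1:, :] != mask[:-1, :]
    out = np.full(mask.shape, 255, dtype=np.uint8)
    out[edges] = 0
    return out
```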
My hypothesis was that it would perform worse, but I wanted to be sure. OpenCV provides a few methods for face detection, like Haar cascades, which I used. I also tested OpenCV's point-of-interest detectors.

My final pipeline did several steps of processing. I trained several style transfer models on black and white abstract patterns, then used a vision algorithm in OpenCV to find the "interesting" regions of each image. With those regions I used SciPy to merge the original image, a black and white version of the image, and a stylized version together to generate a new super image. The results came out excellent for some images, but I was not able to get the process 100% automated. Until these algorithms are aware of the actual content in the image, they will always seem a little off compared to a human hand-made version. I do think this technique could be improved, though, and even now it could be used to accelerate a human's productivity with this kind of image processing.
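The merging step above can be sketched as a simple masked composite. Here `interest_mask` is a hypothetical stand-in for whatever region detector marks the "interesting" areas; the actual blending in the pipeline used SciPy and may have been more involved:

```python
import numpy as np

def merge_regions(stylized, bw, interest_mask):
    # Keep the stylized rendering inside the "interesting" regions and
    # the plain black-and-white version everywhere else. All image
    # inputs are uint8 arrays of the same shape; interest_mask is a
    # boolean array marking the detected regions.
    out = bw.copy()
    out[interest_mask] = stylized[interest_mask]
    return out
```

A softer version would feather the mask edges (e.g. with a Gaussian blur on a float mask) so the stylized patches blend into the line art instead of ending at a hard border.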
This was fun to play with. As I said earlier, I believe this gets you most of the way there, but it's always that last few percent where most of the hard work goes. I'm sure there is a way to make this work better in a 100% automated way. One idea is another post-processing step that takes the open-ended lines and tries to close them, in essence creating more areas to color in. You can play with some of the models I built here: