
A CNN Based Approach for Garments Texture Design Classification

S.M. Sofiqul Islam, Emon Kumar Dey, Md. Nurul Ahad Tawhid, B. M. Mainul Hossain

Abstract


Automatically identifying garment texture designs for recommending fashion trends has become important with the rapid growth of online shopping. By learning image properties efficiently, a machine can achieve better classification accuracy. Several hand-engineered feature coding methods exist for identifying garment design classes. Recently, deep Convolutional Neural Networks (CNNs) have shown better performance on various object recognition tasks. A deep CNN uses multiple levels of representation and abstraction, which helps a machine understand different types of data more accurately. In this paper, a CNN model for identifying garment design classes is proposed. Experimental results on two different datasets show better performance than two existing well-known CNN models (AlexNet and VGGNet) and several state-of-the-art hand-engineered feature extraction methods.
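
The proposed architecture is described in the full text, not on this page. As a rough illustration only, the sketch below shows a small AlexNet/VGGNet-style convolutional classifier of the kind used for texture-design classification; the layer counts, filter sizes, input resolution, and number of design classes (5) are arbitrary assumptions, and PyTorch is used here purely for brevity even though the references cite Caffe as a framework.

    # Illustrative sketch only: not the authors' model. All layer sizes and
    # the class count are assumptions made for demonstration.
    import torch
    import torch.nn as nn

    class TextureCNN(nn.Module):
        def __init__(self, num_classes: int = 5):
            super().__init__()
            # Stacked convolution + pooling blocks learn increasingly
            # abstract texture features (multiple levels of representation).
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                 # 224 -> 112
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                 # 112 -> 56
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),         # global average pooling
            )
            self.classifier = nn.Linear(128, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    if __name__ == "__main__":
        model = TextureCNN(num_classes=5)
        dummy = torch.randn(1, 3, 224, 224)      # one RGB garment image
        print(model(dummy).shape)                # torch.Size([1, 5])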


Keywords


CNN; Deep Learning; AlexNet; VGGNet; Texture Descriptor; Garment Categories; Garment Trend Identification; Design Classification for Garments

Full Text:

PDF

References


J. Wu and J. M. Rehg, “Centrist: a visual descriptor for scene categorization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1489-1501, August 2011.

T. Ojala, M. Pietikainen and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.

O. L. Junior, D. Delgado, V. Gonçalves and U. Nunes, “Trainable classifier-fusion schemes: an application to pedestrian detection,” Proc. 12th International IEEE Conference on Intelligent Transportation Systems (ITSC 09), IEEE Press, pp. 1-6, 2009.

X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635-1650, 2010.

Z. Guo, L. Zhang and D. Zhang, “A completed modeling of local binary pattern operator for texture classification,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657-1663, 2010.

E. K. Dey, M. N. A. Tawhid and M. Shoyaib, “An automated system for garment texture design class identification,” Computers, vol. 4, no. 3, pp. 265-282, 2015.

A. Krizhevsky, I. Sutskever and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Proc. Advances in neural information processing systems, pp. 1097-1105, 2012.

B. Zhou, A. Lapedriza, J. Xiao, A. Torralba and A. Oliva, “Learning deep features for scene recognition using places database,” Proc. Advances in neural information processing systems, pp. 487-495, 2014.

K. Chatfield, K. Simonyan, A. Vedaldi and A. Zisserman, “Return of the devil in the details: delving deep into convolutional nets,” arXiv preprint arXiv:1405.3531, 2014.

P. Heit, “The berkeley model,” Health education, vol. 8, no. 1, pp. 2-3, 1977.

L. Wang, C. Y. Lee, Z. Tu and S. Lazebnik, “Training deeper convolutional networks with deep supervision,” arXiv preprint arXiv:1505.02496, 2015.

G. Levi and T. Hassner, “Age and gender classification using convolutional neural networks,” Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 34-42, 2015.

Z. Ge, C. McCool and P. Corke, “Content specific feature learning for fine-grained plant classification,” Proc. CLEF (Working notes), 2015.

K. Yamaguchi, M. H. Kiapour, L. E. Ortiz and T. L. Berg, “Parsing clothing in fashion photographs,” Proc. Computer Vision and Pattern Recognition (CVPR), pp. 3570-3577, 2012.

K. Yamaguchi, M. H. Kiapour and T. L. Berg, “Paper doll parsing: retrieving similar styles to parse clothing items,” Proc. of the IEEE International Conference on Computer Vision, pp. 3519-3526, 2013.

C. Shan, S. Gong and P. W. McOwan, “Facial expression recognition based on local binary patterns: a comprehensive study,” Image and Vision Computing, vol. 27, no. 6, pp. 803-816, 2009.

X. Feng, A. Hadid and M. Pietikäinen, “A coarse-to-fine classification scheme for facial expression recognition,” Proc. International Conference Image Analysis and Recognition, Springer Berlin Heidelberg, pp. 668-675, 2004.

E. Simo-Serra, S. Fidler, F. Moreno-Noguer and R. Urtasun, “A high performance CRF model for clothes parsing,” Proc. Asian conference on computer vision, Springer International Publishing, pp. 64-81, 2014.

S. Vittayakorn, K. Yamaguchi, A. C. Berg and T. L. Berg, “Runway to realway: visual analysis of fashion,” Proc. IEEE Winter Conference on Applications of Computer Vision, IEEE Press, pp. 951-958, 2015.

Y. Kalantidis, L. Kennedy and L. J. Li, “Getting the look: clothing recognition and segmentation for automatic product suggestions in everyday photos,” Proc. of the 3rd ACM conference on International conference on multimedia retrieval, ACM, pp. 105-112, 2013.

A. C. Gallagher and T. Chen, “Clothing cosegmentation for recognizing people,” Proc. Computer Vision and Pattern Recognition (CVPR 2008), IEEE Press, pp. 1-8, 2008.

L. Bourdev, S. Maji and J. Malik, “Describing people: a poselet-based approach to attribute classification,” Proc. International Conference on Computer Vision, IEEE Press, pp. 1543-1550, 2011.

S. Arivazhagan and L. Ganesan, “Texture classification using wavelet transform,” Pattern recognition letters, vol. 24, no. 9, pp. 1513-1521, 2003.

M. M. Rahman, S. Rahman, M. Kamal, E. K. Dey, M. A. A. Wadud and M. Shoyaib, “Noise adaptive binary pattern for face image analysis,” Proc. 18th International Conference on Computer and Information Technology (ICCIT), IEEE Press, pp. 390-395, 2015.

B. Jun, I. Choi and D. Kim, “Local transform features and hybridization for accurate face and human detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1423-1436, 2013.

S. Lazebnik, C. Schmid and J. Ponce, “Beyond bags of features: spatial pyramid matching for recognizing natural scene categories,” Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), IEEE Press, pp. 2169-2178, 2006.

B. Lao and K. Jagadeesh, “Convolutional neural networks for fashion classification and object detection,” http://cs231n.stanford.edu/reports/BLAO_KJAG_CS231N_FinalPaperFashionClassification.pdf, June 26, 2016.

F. Hu, G. S. Xia, J. Hu and L. Zhang, “Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery,” Remote Sensing, vol. 7, no. 11, pp. 14680-14707, 2015.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama and T. Darrell, “Caffe: convolutional architecture for fast feature embedding,” Proc. 22nd ACM international conference on Multimedia, ACM, pp. 675-678, 2014.

M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” Proc. European Conference on Computer Vision, Springer International Publishing, pp. 818-833, 2014.

M. Manfredi, C. Grana, S. Calderara and R. Cucchiara, “A complete system for garment segmentation and color classification,” Machine Vision and Applications, vol. 25, no. 4, pp. 955-969, 2014.

H. Chen, A. Gallagher and B. Girod, “Describing clothing by semantic attributes,” Proc. European Conference on Computer Vision, Springer Berlin Heidelberg, pp. 609-623, 2012.

M. Liu, S. Li, S. Shan and X. Chen, “Au-aware deep networks for facial expression recognition,” Proc. 10th Automatic Face and Gesture Recognition (FG), IEEE Press, pp. 1-6, 2013.




This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
