Abstract

COCO stands for Common Objects in COntext. It is a dataset released by Microsoft for image recognition. The images in MS COCO are split into training, validation, and test sets. COCO collected its images by searching Flickr for 80 object categories and a variety of scene types, and relied on Amazon Mechanical Turk (AMT) for the annotation work.

The COCO dataset currently has three annotation types: object instances, object keypoints, and image captions. This article focuses on object instances.

Annotation format of the Object Instance type

1. Overall JSON file format

An Object Instance annotation file is divided, from top to bottom, into the following sections:

{
    "info": info,
    "licenses": [license],
    "images": [image],
    "annotations": [annotation],
    "categories": [category]
}
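
To make the layout above concrete, here is a minimal sketch of loading the file with the standard json module and listing the five sections. It assumes instances_val2017.json has been downloaded to the working directory:

import json

with open('instances_val2017.json', 'r') as f:
    data = json.load(f)

# the five top-level sections: info, licenses, images, annotations, categories
print(list(data.keys()))
print(len(data['images']), 'images,', len(data['annotations']), 'annotations,',
      len(data['categories']), 'categories')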

2. The annotations field

The annotations field is an array containing multiple annotation instances. An annotation in turn contains a series of fields, such as the object's category id and its segmentation mask. The segmentation format depends on whether the instance is a single object (iscrowd=0, in which case polygons are used) or a group of objects (iscrowd=1, in which case RLE is used). bbox stores the object's bounding-box annotation; note that, unlike the VOC format, COCO stores it as [top-left x, top-left y, width, height]. The structure is as follows:

annotation{
    "id": int,
    "image_id": int,
    "category_id": int,
    "segmentation": RLE or [polygon],
    "area": float,
    "bbox": [x,y,width,height],
    "iscrowd": 0 or 1,
}

Note that a single object (iscrowd=0) may require several polygons to describe it, for example when the object is partially occluded in the image. When iscrowd=1 (a group of objects is annotated, such as a crowd of people), the segmentation uses the RLE format.

In addition, every object (whether iscrowd=0 or iscrowd=1) has a bounding box bbox: the coordinates of the top-left corner and the width and height of the box are given as an array, whose first element is the x coordinate of the top-left corner.

area is the area of the encoded mask, measured in pixels.

Finally, the category_id field of an annotation stores the id of the category the object belongs to; the category name and its supercategory name are looked up in the categories section described below.
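
Since the segmentation, area, and bbox fields are tightly coupled, a small sketch of how they relate may help. It uses the pycocotools mask utilities (an assumption: pycocotools is installed; the helper name ann_to_rle is hypothetical):

from pycocotools import mask as maskUtils

def ann_to_rle(ann, img_height, img_width):
    # hypothetical helper: convert an annotation's segmentation to compressed RLE
    seg = ann['segmentation']
    if ann['iscrowd'] == 0:
        # polygon format: possibly several polygons for one partially occluded object
        rles = maskUtils.frPyObjects(seg, img_height, img_width)
        return maskUtils.merge(rles)
    # iscrowd=1: segmentation is stored as uncompressed RLE
    return maskUtils.frPyObjects(seg, img_height, img_width)

# maskUtils.area(rle) recomputes the 'area' field, and
# maskUtils.toBbox(rle) returns [x, y, width, height] like 'bbox'.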

Below is an annotation instance taken from the instances_val2017.json file:

{
    "segmentation": [[510.66,423.01,511.72,420.03,510.45,416.0,510.34,413.02,510.77,410.26,
                      510.77,407.5,510.34,405.16,511.51,402.83,511.41,400.49,510.24,398.16,509.39,
                      397.31,504.61,399.22,502.17,399.64,500.89,401.66,500.47,402.08,499.09,401.87,
                      495.79,401.98,490.59,401.77,488.79,401.77,485.39,398.58,483.9,397.31,481.56,
                      396.35,478.48,395.93,476.68,396.03,475.4,396.77,473.92,398.79,473.28,399.96,
                      473.49,401.87,474.56,403.47,473.07,405.59,473.39,407.71,476.68,409.41,479.23,
                      409.73,481.56,410.69,480.4,411.85,481.35,414.93,479.86,418.65,477.32,420.03,
                      476.04,422.58,479.02,422.58,480.29,423.01,483.79,419.93,486.66,416.21,490.06,
                      415.57,492.18,416.85,491.65,420.24,492.82,422.9,493.56,424.39,496.43,424.6,
                      498.02,423.01,498.13,421.31,497.07,420.03,497.07,415.15,496.33,414.51,501.1,
                      411.96,502.06,411.32,503.02,415.04,503.33,418.12,501.1,420.24,498.98,421.63,
                      500.47,424.39,505.03,423.32,506.2,421.31,507.69,419.5,506.31,423.32,510.03,
                      423.01,510.45,423.01]],
    "area": 702.1057499999998,
    "iscrowd": 0,
    "image_id": 289343,
    "bbox": [473.07,395.93,38.65,28.67],
    "category_id": 18,
    "id": 1768
},
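
To relate this record back to the bbox convention above (and to the VOC-style corner format it differs from), here is a hedged sketch using the pycocotools COCO API, assuming both pycocotools and instances_val2017.json are available locally:

from pycocotools.coco import COCO

coco = COCO('instances_val2017.json')
anns = coco.loadAnns(coco.getAnnIds(imgIds=289343))

for ann in anns:
    x, y, w, h = ann['bbox']        # COCO: [top-left x, top-left y, width, height]
    voc_box = [x, y, x + w, y + h]  # VOC-style corners: [xmin, ymin, xmax, ymax]
    print(ann['id'], ann['category_id'], voc_box)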

3. The categories field

categories is an array containing multiple category instances. A category is structured as follows:

{
    "id": int,
    "name": str,
    "supercategory": str,
}

Two category instances taken from the instances_val2017.json file are shown below:

{
    "supercategory": "person",
    "id": 1,
    "name": "person"
},
{
    "supercategory": "vehicle",
    "id": 2,
    "name": "bicycle"
},
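
Because annotations only carry a numeric category_id, a lookup table built from categories is usually needed. A minimal sketch, assuming data is the dict loaded from instances_val2017.json as in the first example:

id_to_name = {cat['id']: cat['name'] for cat in data['categories']}
id_to_super = {cat['id']: cat['supercategory'] for cat in data['categories']}

# e.g. resolve the category_id of 18 from the annotation example shown earlier
print(id_to_name[18], id_to_super[18])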

Code for converting Labelme annotations to the COCO format:

# -*- coding:utf-8 -*-
# !/usr/bin/env python

import json
import glob
import os

import numpy as np
import PIL.Image
import PIL.ImageDraw  # needed by polygons_to_mask below
from labelme import utils

# label-name -> category-id mapping for this dataset (a household garbage
# classification set with 44 classes); adjust to match your own labelme labels
labels={'一次性快餐盒':1,'书籍纸张':2,'充电宝':3,'剩饭剩菜':4,'包':5,
           '垃圾桶':6,'塑料器皿':7,'塑料玩具':8,'塑料衣架':9,'大骨头':10,'干电池':11,
           '快递纸袋':12,'插头电线':13,'旧衣服':14,'易拉罐':15,'枕头':16,'果皮果肉':17,'毛绒玩具':18,
           '污损塑料':19,'污损用纸':20,'洗护用品':21,'烟蒂':22,'牙签':23,'玻璃器皿':24,'砧板':25,
           '筷子':26,'纸盒纸箱':27,'花盆':28,'茶叶渣':29,'菜帮菜叶':30,'蛋壳':31,'调料瓶':32,
           '软膏':33,'过期药物':34,'酒瓶':35,'金属厨具':36,'金属器皿':37,'金属食品罐':38,'锅':39,
           '陶瓷器皿':40,'鞋':41,'食用油桶':42,'饮料瓶':43,'鱼骨':44}
class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return super(MyEncoder, self).default(obj)


class labelme2coco(object):
    def __init__(self, labelme_json=[], save_json_path='./tran.json'):
        '''
        :param labelme_json: list of paths to all of the labelme json files
        :param save_json_path: where to save the generated COCO json
        '''
        self.labelme_json = labelme_json
        self.save_json_path = save_json_path
        self.images = []
        self.categories = []
        self.annotations = []
        # self.data_coco = {}
        self.label = []
        self.annID = 1
        self.height = 0
        self.width = 0

        self.save_json()

    def data_transfer(self):

        for num, json_file in enumerate(self.labelme_json):
            imagePath = json_file.split('.')[0] + '.jpg'
            imageName = os.path.basename(imagePath)
            print(imageName)
            with open(json_file, 'r', encoding='utf-8') as fp:
                data = json.load(fp)  # load the labelme json file
                self.images.append(self.image(data, num,imageName))
                for shapes in data['shapes']:
                    label = shapes['label']
                    if label not in self.label:
                        self.categories.append(self.categorie(label))
                        self.label.append(label)
                    points = shapes['points']  # points from a labelme rectangle have only two corners; expand to four points if needed
                    # points.append([points[0][0],points[1][1]])
                    # points.append([points[1][0],points[0][1]])
                    self.annotations.append(self.annotation(points, label, num))
                    self.annID += 1

    def image(self, data, num, imageName):
        image = {}
        img = utils.img_b64_to_arr(data['imageData'])  # decode the embedded image data
        # img = io.imread(data['imagePath'])  # alternatively, open the image via its path
        # img = cv2.imread(data['imagePath'], 0)
        height, width = img.shape[:2]
        img = None
        image['height'] = height
        image['width'] = width
        image['id'] = num + 1
        # image['file_name'] = data['imagePath'].split('/')[-1]
        image['file_name'] = imageName
        self.height = height
        self.width = width

        return image

    def categorie(self, label):
        categorie = {}
        categorie['supercategory'] = 'Cancer'  # hard-coded supercategory; change to suit your dataset
        categorie['id'] = labels[label]  # 0 is reserved for the background by default
        categorie['name'] = label
        return categorie

    def annotation(self, points, label, num):
        annotation = {}
        annotation['segmentation'] = [list(np.asarray(points).flatten())]
        annotation['iscrowd'] = 0
        annotation['image_id'] = num + 1
        # annotation['bbox'] = str(self.getbbox(points))  # saving the json with bbox as a list raised an error in the original code (reason unknown)
        # list(map(int, a[1:-1].split(',')))  # with a = annotation['bbox'], converts the string back to a list
        annotation['bbox'] = list(map(float, self.getbbox(points)))
        annotation['area'] = annotation['bbox'][2] * annotation['bbox'][3]
        annotation['category_id'] = self.getcatid(label)  # note: the original code hard-coded this to 1
        print(label,annotation['category_id'])
        annotation['id'] = self.annID
        return annotation

    def getcatid(self, label):
        for categorie in self.categories:
            if label == categorie['name']:
                return categorie['id']
        return 1

    def getbbox(self, points):
        # img = np.zeros([self.height, self.width], np.uint8)
        # cv2.polylines(img, [np.asarray(points)], True, 1, lineType=cv2.LINE_AA)  # draw the boundary
        # cv2.fillPoly(img, [np.asarray(points)], 1)  # fill the polygon; interior pixels set to 1
        polygons = points

        mask = self.polygons_to_mask([self.height, self.width], polygons)
        return self.mask2box(mask)

    def mask2box(self, mask):
        '''Compute the bounding box from a mask.
        mask: [h, w] image made of 0s and 1s
        Pixels equal to 1 belong to the object; the box follows from the
        min/max row and column indices (top-left and bottom-right corners).
        '''
        # np.where(mask==1)
        index = np.argwhere(mask == 1)
        rows = index[:, 0]
        cols = index[:, 1]
        # top-left corner (row, col)
        left_top_r = np.min(rows)  # y
        left_top_c = np.min(cols)  # x

        # bottom-right corner (row, col)
        right_bottom_r = np.max(rows)
        right_bottom_c = np.max(cols)

        # return [(left_top_r,left_top_c),(right_bottom_r,right_bottom_c)]
        # return [(left_top_c, left_top_r), (right_bottom_c, right_bottom_r)]
        # return [left_top_c, left_top_r, right_bottom_c, right_bottom_r]  # [x1,y1,x2,y2]
        return [left_top_c, left_top_r, right_bottom_c - left_top_c,
                right_bottom_r - left_top_r]  # [x, y, w, h], matching the COCO bbox format

    def polygons_to_mask(self, img_shape, polygons):
        mask = np.zeros(img_shape, dtype=np.uint8)
        mask = PIL.Image.fromarray(mask)
        xy = list(map(tuple, polygons))
        PIL.ImageDraw.Draw(mask).polygon(xy=xy, outline=1, fill=1)
        mask = np.array(mask, dtype=bool)
        return mask

    def data2coco(self):
        data_coco = {}
        data_coco['images'] = self.images
        data_coco['categories'] = self.categories
        data_coco['annotations'] = self.annotations
        return data_coco

    def save_json(self):
        self.data_transfer()
        self.data_coco = self.data2coco()
        # write the COCO-format json to disk
        json.dump(self.data_coco, open(self.save_json_path, 'w'), indent=4, cls=MyEncoder)  # indent=4 for readability


labelme_json = glob.glob('D:/HWLabelme/*.json')
from sklearn.model_selection import train_test_split
trainval_files, test_files = train_test_split(labelme_json, test_size=0.2, random_state=55)

labelme2coco(trainval_files, 'instances_train2017.json')
labelme2coco(test_files, 'instances_val2017.json')
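
As a quick sanity check on the generated files, the pycocotools COCO API should be able to index them (a sketch, assuming pycocotools is installed and the script above has been run):

from pycocotools.coco import COCO

coco = COCO('instances_train2017.json')
print(len(coco.imgs), 'images,', len(coco.anns), 'annotations,', len(coco.cats), 'categories')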


Reposted from: https://wanghao.blog.csdn.net/article/details/106255087