
I recently had a project that needed a tool for automated visual processing, and Python ended up being the language of choice, so I took the opportunity to learn it systematically. With little time to learn and a need to develop quickly, I am recording the key steps here so I don't forget them.

Links to this series:

开源 python 应用 开发(一)python、pip、pyAutogui、python opencv安装-CSDN博客

开源 python 应用 开发(二)基于pyautogui、open cv 视觉识别的工具自动化-CSDN博客

开源 python 应用 开发(三)python语法介绍-CSDN博客

开源 python 应用 开发(四)python文件和系统综合应用-CSDN博客

Recommended links:

开源 Arkts 鸿蒙应用 开发(一)工程文件分析-CSDN博客

开源 Arkts 鸿蒙应用 开发(二)封装库.har制作和应用-CSDN博客

开源 Arkts 鸿蒙应用 开发(三)Arkts的介绍-CSDN博客

开源 Arkts 鸿蒙应用 开发(四)布局和常用控件-CSDN博客

开源 Arkts 鸿蒙应用 开发(五)控件组成和复杂控件-CSDN博客

Recommended links:

开源 java android app 开发(一)开发环境的搭建-CSDN博客

开源 java android app 开发(二)工程文件结构-CSDN博客

开源 java android app 开发(三)GUI界面布局和常用组件-CSDN博客

开源 java android app 开发(四)GUI界面重要组件-CSDN博客

开源 java android app 开发(五)文件和数据库存储-CSDN博客

开源 java android app 开发(六)多媒体使用-CSDN博客

开源 java android app 开发(七)通讯之Tcp和Http-CSDN博客

开源 java android app 开发(八)通讯之Mqtt和Ble-CSDN博客

开源 java android app 开发(九)后台之线程和服务-CSDN博客

开源 java android app 开发(十)广播机制-CSDN博客

开源 java android app 开发(十一)调试、发布-CSDN博客

开源 java android app 开发(十二)封库.aar-CSDN博客

Recommended links:

开源C# .net mvc 开发(一)WEB搭建_c#部署web程序-CSDN博客

开源 C# .net mvc 开发(二)网站快速搭建_c#网站开发-CSDN博客

开源 C# .net mvc 开发(三)WEB内外网访问(VS发布、IIS配置网站、花生壳外网穿刺访问)_c# mvc 域名下不可訪問內網,內網下可以訪問域名-CSDN博客

开源 C# .net mvc 开发(四)工程结构、页面提交以及显示_c#工程结构-CSDN博客

开源 C# .net mvc 开发(五)常用代码快速开发_c# mvc开发-CSDN博客

This installment implements object detection with the YOLOv3 (You Only Look Once, version 3) deep learning model. It detects bananas, cell phones, and cars and pedestrians in a night scene, and it was fun to try for the first time. The input image matters: if you substitute your own pictures, prefer large, high-resolution ones. The chapter covers:

1.  The YOLOv3 model

2.  Main functions

3.  Complete code

4.  Demo results

1. The YOLOv3 model

Uses a pre-trained YOLOv3 model (requires the yolov3.cfg and yolov3.weights files).

Based on the COCO dataset (80 classes).

The code needs three files: yolov3.cfg, yolov3.weights, and coco.names; all three are easy to find online (a small download sketch follows the link below).

To use a different model variant, swap in the corresponding .cfg and .weights files.

Link: YOLO: Real-Time Object Detection
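The cfg and names files are small text files; the weights file is large (roughly 240 MB). If you would rather fetch all three from a script than by hand, a minimal sketch along these lines should work. The URLs below are the commonly used locations (the weights link published on the YOLO page and the cfg/names files from the darknet GitHub repository) and should be treated as assumptions to verify before use:

import os
import urllib.request

# Assumed, commonly used download locations -- check them before relying on this.
FILES = {
    "yolov3.cfg": "https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg",
    "yolov3.weights": "https://pjreddie.com/media/files/yolov3.weights",  # large file, ~240 MB
    "coco.names": "https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names",
}

def fetch_model_files(target_dir="."):
    # Download each file next to the script, skipping ones that already exist
    for name, url in FILES.items():
        path = os.path.join(target_dir, name)
        if not os.path.exists(path):
            print(f"Downloading {name} ...")
            urllib.request.urlretrieve(url, path)

if __name__ == "__main__":
    fetch_model_files()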

The yolov3.cfg file:

[net]
# Testing
# batch=1
# subdivisions=1
# Training
batch=64
subdivisions=16
width=608
height=608
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

# Downsample

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=256
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=512
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

######################

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 6,7,8
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 61

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 3,4,5
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 36

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 0,1,2
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

The coco.names file (a quick consistency-check sketch follows the listing):

person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
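Before wiring everything together, a quick consistency check between the two text files above can save debugging time: coco.names must contain exactly 80 class names, matching the classes=80 entries in yolov3.cfg (one per [yolo] layer). A small illustrative sketch, assuming both files sit in the current directory:

# Sanity check: number of class names vs. classes= entries in the cfg
with open("coco.names") as f:
    names = [line.strip() for line in f if line.strip()]

with open("yolov3.cfg") as f:
    class_counts = [int(line.split("=")[1]) for line in f
                    if line.strip().startswith("classes")]

print("coco.names entries:", len(names))          # expected: 80
print("classes= entries in cfg:", class_counts)   # expected: [80, 80, 80]
assert all(c == len(names) for c in class_counts), "names/classes mismatch"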

2. Main functions

2.1  The load_yolo() function

Purpose: load the YOLOv3 network and its companion files.

def load_yolo():
    # Directory of this script
    current_dir = os.path.dirname(os.path.abspath(__file__))
    # Full paths to the three model files
    cfg_path = os.path.join(current_dir, "yolov3.cfg")
    weights_path = os.path.join(current_dir, "yolov3.weights")
    names_path = os.path.join(current_dir, "coco.names")
    # Make sure all three files exist
    if not all(os.path.exists(f) for f in [cfg_path, weights_path, names_path]):
        raise FileNotFoundError("Missing YOLO model files: put yolov3.cfg, yolov3.weights and coco.names next to this script")
    # Load the network
    net = cv2.dnn.readNet(weights_path, cfg_path)
    with open(names_path, "r") as f:
        classes = [line.strip() for line in f.readlines()]
    layer_names = net.getLayerNames()
    output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
    return net, classes, output_layers
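One portability note: the last list comprehension assumes a recent OpenCV 4.x, where net.getUnconnectedOutLayers() returns a flat array of 1-based integer indices. On some older OpenCV builds it returns nested one-element arrays instead, and the same line then fails. If you need to support both, a version-tolerant variant of that one line (an assumption to verify against your installed OpenCV) is:

# Drop-in replacement for the output_layers line in load_yolo():
# flatten() copes with both the flat and the nested return shapes.
out_idxs = np.array(net.getUnconnectedOutLayers()).flatten()
output_layers = [layer_names[int(i) - 1] for i in out_idxs]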

2.2  The detect_objects(img, net, output_layers) function

Purpose: run object detection on the input image.

def detect_objects(img, net, output_layers):
    # Build a blob from the image: scale pixels to [0, 1], resize to 416x416, swap BGR to RGB
    blob = cv2.dnn.blobFromImage(img, scalefactor=1/255.0, size=(416, 416), swapRB=True, crop=False)
    # Set the input and run a forward pass through the output layers
    net.setInput(blob)
    outputs = net.forward(output_layers)
    return outputs
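A note on the input size: the blob is resized to 416x416 even though the cfg above says width=608 and height=608. YOLOv3 is fully convolutional, so either works at inference time; the size only trades speed against accuracy, and darknet-style models expect it to be a multiple of 32. If small objects are being missed, a larger blob is worth trying, for example:

# Variant of the call above: larger input, slower but usually better on small objects
blob = cv2.dnn.blobFromImage(img, scalefactor=1/255.0, size=(608, 608), swapRB=True, crop=False)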

2.3  The get_box_dimensions(outputs, height, width) function

Purpose: process the network outputs and extract bounding-box information.

def get_box_dimensions(outputs, height, width):
    boxes = []
    confidences = []
    class_ids = []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:  # confidence threshold
                # Convert the normalized center/size values to pixel coordinates
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                # Top-left corner of the rectangle
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)
    return boxes, confidences, class_ids
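For context on the slicing: each detection row is a vector of length 85 = 4 + 1 + 80. The first four values are the box center x, center y, width and height, normalized to the input image; index 4 is the objectness score; indices 5 through 84 are the per-class scores for the 80 COCO classes, which is why the code reads scores from detection[5:]. Annotated, the layout inside the loop looks like this:

# Layout of one YOLOv3 detection row (length 85)
cx, cy, w, h = detection[0:4]   # box geometry, normalized to [0, 1]
objectness = detection[4]       # how likely the box contains any object at all
class_scores = detection[5:]    # one score per COCO class (80 values)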

2.4  The draw_labels(boxes, confidences, class_ids, classes, img) function

Purpose: draw the detection results on the image.

def draw_labels(boxes, confidences, class_ids, classes, img):
    # Apply non-maximum suppression to drop overlapping boxes
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    # One random color per class
    colors = np.random.uniform(0, 255, size=(len(classes), 3))
    if len(indexes) > 0:
        for i in indexes.flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = str(round(confidences[i], 2))
            color = colors[class_ids[i]]
            # Draw the bounding box and its label
            cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
            cv2.putText(img, f"{label} {confidence}", (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
    return img
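The two literals passed to cv2.dnn.NMSBoxes are the score threshold (0.5, boxes below it are discarded) and the NMS IoU threshold (0.4, of several heavily overlapping boxes only the highest-scoring one is kept). If you plan to tune them, naming them keeps the intent obvious; a small variant of the call above:

SCORE_THRESHOLD = 0.5    # discard boxes whose class confidence is below this
NMS_IOU_THRESHOLD = 0.4  # above this overlap, keep only the best-scoring box
indexes = cv2.dnn.NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_IOU_THRESHOLD)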

2.5  The object_detection(image_path) function

Purpose: the main routine that ties the whole detection pipeline together.

def object_detection(image_path):
    try:
        net, classes, output_layers = load_yolo()
        img = cv2.imread(image_path)
        if img is None:
            raise FileNotFoundError(f"Could not load image: {image_path}")
        height, width = img.shape[:2]
        outputs = detect_objects(img, net, output_layers)
        boxes, confidences, class_ids = get_box_dimensions(outputs, height, width)
        img = draw_labels(boxes, confidences, class_ids, classes, img)
        cv2.imshow("Object Detection", img)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
    except Exception as e:
        print(f"Error: {str(e)}")
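cv2.imshow needs a desktop environment; on a headless machine (for example over SSH or in a container) it will fail. A sketch of a variant that writes the annotated image to disk instead, with the output file name chosen here purely as an assumption:

def object_detection_to_file(image_path, output_path="detection_result.jpg"):
    # Same pipeline as object_detection(), but saves the result instead of displaying it
    net, classes, output_layers = load_yolo()
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(f"Could not load image: {image_path}")
    height, width = img.shape[:2]
    outputs = detect_objects(img, net, output_layers)
    boxes, confidences, class_ids = get_box_dimensions(outputs, height, width)
    img = draw_labels(boxes, confidences, class_ids, classes, img)
    cv2.imwrite(output_path, img)
    print(f"Saved annotated image to {output_path}")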

3. Complete code

import cv2
import numpy as np
import os


def load_yolo():
    # Directory of this script
    current_dir = os.path.dirname(os.path.abspath(__file__))
    # Full paths to the three model files
    cfg_path = os.path.join(current_dir, "yolov3.cfg")
    weights_path = os.path.join(current_dir, "yolov3.weights")
    names_path = os.path.join(current_dir, "coco.names")
    # Make sure all three files exist
    if not all(os.path.exists(f) for f in [cfg_path, weights_path, names_path]):
        raise FileNotFoundError("Missing YOLO model files: put yolov3.cfg, yolov3.weights and coco.names next to this script")
    # Load the network
    net = cv2.dnn.readNet(weights_path, cfg_path)
    with open(names_path, "r") as f:
        classes = [line.strip() for line in f.readlines()]
    layer_names = net.getLayerNames()
    output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
    return net, classes, output_layers


def detect_objects(img, net, output_layers):
    # Build a blob from the image: scale pixels to [0, 1], resize to 416x416, swap BGR to RGB
    blob = cv2.dnn.blobFromImage(img, scalefactor=1/255.0, size=(416, 416), swapRB=True, crop=False)
    # Set the input and run a forward pass through the output layers
    net.setInput(blob)
    outputs = net.forward(output_layers)
    return outputs


def get_box_dimensions(outputs, height, width):
    boxes = []
    confidences = []
    class_ids = []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:  # confidence threshold
                # Convert the normalized center/size values to pixel coordinates
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                # Top-left corner of the rectangle
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)
    return boxes, confidences, class_ids


def draw_labels(boxes, confidences, class_ids, classes, img):
    # Apply non-maximum suppression to drop overlapping boxes
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    # One random color per class
    colors = np.random.uniform(0, 255, size=(len(classes), 3))
    if len(indexes) > 0:
        for i in indexes.flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = str(round(confidences[i], 2))
            color = colors[class_ids[i]]
            # Draw the bounding box and its label
            cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
            cv2.putText(img, f"{label} {confidence}", (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
    return img


def object_detection(image_path):
    try:
        net, classes, output_layers = load_yolo()
        img = cv2.imread(image_path)
        if img is None:
            raise FileNotFoundError(f"Could not load image: {image_path}")
        height, width = img.shape[:2]
        outputs = detect_objects(img, net, output_layers)
        boxes, confidences, class_ids = get_box_dimensions(outputs, height, width)
        img = draw_labels(boxes, confidences, class_ids, classes, img)
        cv2.imshow("Object Detection", img)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
    except Exception as e:
        print(f"Error: {str(e)}")


if __name__ == "__main__":
    # Example usage
    image_path = "myimg.jpg"  # replace with the path to your own image
    object_detection(image_path)

4. Demo results

4.1  Detecting bananas

4.2  Detecting a cell phone

4.3  Detecting cars and pedestrians in a night scene

