
【OpenCV】A Walkthrough of the Network Model Inference Flow (readNetFromONNX, setInput, forward, etc.)

Contents

  • 1. Model loading (readNetFromONNX())
    • 1.1 Initializing the parse functions (parseOperatorSet())
    • 1.2 Extracting tensors (getGraphTensors())
    • 1.3 Node handling (handleNode())
  • 2. Input preparation (blobFromImage() & setInput())
  • 3. Inference (forward())

A previous post covered how to handle image files with imread() and imshow(). This post briefly walks through how OpenCV's dnn module runs inference on a network model. The overall flow is:
(1) load the model
(2) prepare the input data
(3) run inference
Some implementation details are not examined in depth here.
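As a reference for the sections below, here is a minimal end-to-end sketch of those three steps (the model path, image path, input size and preprocessing values are illustrative placeholders, not part of the source analyzed here):

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    // 1. Load the model ("model.onnx" is a placeholder path)
    cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");

    // 2. Prepare the input: convert an image into a 4D NCHW blob
    cv::Mat img = cv::imread("input.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(224, 224),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);

    // 3. Run inference on all unconnected output layers
    std::vector<cv::Mat> outputs;
    net.forward(outputs, net.getUnconnectedOutLayersNames());
    return 0;
}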

1. Model loading (readNetFromONNX())

readNetFromONNX() loads an ONNX model from the given path. The function is defined in sources/modules/dnn/src/onnx/onnx_importer.cpp:

Net readNetFromONNX(const String& onnxFile)
{
    return detail::readNetDiagnostic<ONNXImporter>(onnxFile.c_str());
}

detail::readNetDiagnostic<ONNXImporter> is defined in sources/modules/dnn/src/common_dnn.hpp and imports the network model:

template <typename Importer, typename ... Args>
Net readNetDiagnostic(Args&& ... args)
{
    Net maybeDebugNet = readNet<Importer>(std::forward<Args>(args)...);
    // DNN_DIAGNOSTICS_RUN defaults to false
    if (DNN_DIAGNOSTICS_RUN && !DNN_SKIP_REAL_IMPORT)
    {
        // if we just imported the net in diagnostic mode, disable it and import again
        enableModelDiagnostics(false);
        Net releaseNet = readNet<Importer>(std::forward<Args>(args)...);
        enableModelDiagnostics(true);
        return releaseNet;
    }
    return maybeDebugNet;
}

readNet() performs the actual import of the network model:

template <typename Importer, typename ... Args>
Net readNet(Args&& ... args)
{
    Net net;
    Importer importer(net, std::forward<Args>(args)...);
    return net;
}

For an ONNX model, the ONNXImporter constructor is invoked:

ONNXImporter::ONNXImporter(Net& net, const char *onnxFile)
    : layerHandler(DNN_DIAGNOSTICS_RUN ? new ONNXLayerHandler(this) : nullptr)
    , dstNet(net), onnx_opset(0), useLegacyNames(getParamUseLegacyNames())
{
    hasDynamicShapes = false;
    CV_Assert(onnxFile);
    CV_LOG_DEBUG(NULL, "DNN/ONNX: processing ONNX model from file: " << onnxFile);

    std::fstream input(onnxFile, std::ios::in | std::ios::binary);
    if (!input)
    {
        CV_Error(Error::StsBadArg, cv::format("Can't read ONNX file: %s", onnxFile));
    }

    // parse the input file stream with the Protocol Buffers library
    if (!model_proto.ParseFromIstream(&input))
    {
        CV_Error(Error::StsUnsupportedFormat, cv::format("Failed to parse ONNX model: %s", onnxFile));
    }

    // the network structure parsed into GraphProto above is converted into an internal
    // representation that OpenCV DNN can run (i.e. dstNet is built here)
    populateNet();
}
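As the constructor shows, an unreadable or unparsable file triggers CV_Error(), which throws a cv::Exception. A caller can therefore guard the load roughly like this (a sketch; the helper name and path are made up for illustration):

#include <cstdlib>
#include <iostream>
#include <string>
#include <opencv2/dnn.hpp>

// Hypothetical helper: load a model or terminate with a readable message.
cv::dnn::Net loadModelOrExit(const std::string& path)
{
    try
    {
        cv::dnn::Net net = cv::dnn::readNetFromONNX(path);
        CV_Assert(!net.empty());
        return net;
    }
    catch (const cv::Exception& e)
    {
        std::cerr << "Failed to load ONNX model '" << path << "': " << e.what() << std::endl;
        std::exit(1);
    }
}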

The definition of populateNet():

void ONNXImporter::populateNet()
{
    CV_Assert(model_proto.has_graph());
    graph_proto = model_proto.mutable_graph();

    std::string framework_version;
    if (model_proto.has_producer_name())
        framework_name = model_proto.producer_name();
    if (model_proto.has_producer_version())
        framework_version = model_proto.producer_version();

    CV_LOG_INFO(NULL, "DNN/ONNX: loading ONNX"
            << (model_proto.has_ir_version() ? cv::format(" v%d", (int)model_proto.ir_version()) : cv::String())
            << " model produced by '" << framework_name << "'"
            << (framework_version.empty() ? cv::String() : cv::format(":%s", framework_version.c_str()))
            << ". Number of nodes = " << graph_proto->node_size()
            << ", initializers = " << graph_proto->initializer_size()
            << ", inputs = " << graph_proto->input_size()
            << ", outputs = " << graph_proto->output_size()
            );

    /* 1. Parse the operator-set versions and initialize the parse functions */
    parseOperatorSet();

    // simplify the graph
    simplifySubgraphs(*graph_proto);

    const int layersSize = graph_proto->node_size();
    CV_LOG_DEBUG(NULL, "DNN/ONNX: graph simplified to " << layersSize << " nodes");

    /* 2. Extract tensors */
    // graph_proto is the proto object of the ONNX graph; it holds everything about the model,
    // including initializers, inputs and outputs
    constBlobs = getGraphTensors(*graph_proto);  // scan GraphProto.initializer
    std::vector<String> netInputs;  // map with network inputs (without const blobs)
    // Add all the inputs shapes. It includes as constant blobs as network's inputs shapes.
    for (int i = 0; i < graph_proto->input_size(); ++i)
    {
        const opencv_onnx::ValueInfoProto& valueInfoProto = graph_proto->input(i);
        CV_Assert(valueInfoProto.has_name());
        const std::string& name = valueInfoProto.name();
        CV_Assert(valueInfoProto.has_type());
        const opencv_onnx::TypeProto& typeProto = valueInfoProto.type();
        CV_Assert(typeProto.has_tensor_type());
        const opencv_onnx::TypeProto::Tensor& tensor = typeProto.tensor_type();
        CV_Assert(tensor.has_shape());
        const opencv_onnx::TensorShapeProto& tensorShape = tensor.shape();

        int dim_size = tensorShape.dim_size();
        CV_CheckGE(dim_size, 0, "");  // some inputs are scalars (dims=0), e.g. in Test_ONNX_nets.Resnet34_kinetics test
        MatShape inpShape(dim_size);
        for (int j = 0; j < dim_size; ++j)
        {
            const opencv_onnx::TensorShapeProto_Dimension& dimension = tensorShape.dim(j);
            if (dimension.has_dim_param())
            {
                CV_LOG_DEBUG(NULL, "DNN/ONNX: input[" << i << "] dim[" << j << "] = <" << dimension.dim_param() << "> (dynamic)");
            }
            // https://github.com/onnx/onnx/blob/master/docs/DimensionDenotation.md#denotation-definition
            if (dimension.has_denotation())
            {
                CV_LOG_INFO(NULL, "DNN/ONNX: input[" << i << "] dim[" << j << "] denotation is '" << dimension.denotation() << "'");
            }
            inpShape[j] = dimension.dim_value();
            // NHW, NCHW(NHWC), NCDHW(NDHWC); do not set this flag if only N is dynamic
            if (dimension.has_dim_param() && !(j == 0 && inpShape.size() >= 3))
            {
                hasDynamicShapes = true;
            }
        }
        bool isInitialized = ((constBlobs.find(name) != constBlobs.end()));
        CV_LOG_IF_DEBUG(NULL, !isInitialized, "DNN/ONNX: input[" << i << " as '" << name << "'] shape=" << toString(inpShape));
        CV_LOG_IF_VERBOSE(NULL, 0, isInitialized, "DNN/ONNX: pre-initialized input[" << i << " as '" << name << "'] shape=" << toString(inpShape));
        if (dim_size > 0 && !hasDynamicShapes)  // FIXIT result is not reliable for models with multiple inputs
        {
            inpShape[0] = std::max(inpShape[0], 1); // It's OK to have undetermined batch size
        }
        outShapes[valueInfoProto.name()] = inpShape;
        // fill map: push layer name, layer id and output id
        if (!isInitialized)
        {
            netInputs.push_back(name);
            layer_id.insert(std::make_pair(name, LayerInfo(0, netInputs.size() - 1)));
        }
    }

    dstNet.setInputsNames(netInputs);
    if (!hasDynamicShapes)
    {
        for (int i = 0; i < netInputs.size(); ++i)
            dstNet.setInputShape(netInputs[i], outShapes[netInputs[i]]);
    }

    // dump outputs
    for (int i = 0; i < graph_proto->output_size(); ++i)
    {
        dumpValueInfoProto(i, graph_proto->output(i), "output");
    }

    if (DNN_DIAGNOSTICS_RUN) {
        CV_LOG_INFO(NULL, "DNN/ONNX: start diagnostic run!");
        layerHandler->fillRegistry(*graph_proto);
    }

    /* 3. Convert each node into OpenCV's internal layer structure */
    for(int li = 0; li < layersSize; li++) // layersSize is the total number of nodes
    {
        const opencv_onnx::NodeProto& node_proto = graph_proto->node(li);
        handleNode(node_proto);
    }

    // register outputs
    for (int i = 0; i < graph_proto->output_size(); ++i)
    {
        const std::string& output_name = graph_proto->output(i).name();
        if (output_name.empty())
        {
            CV_LOG_ERROR(NULL, "DNN/ONNX: can't register output without name: " << i);
            continue;
        }
        ConstIterLayerId_t layerIt = layer_id.find(output_name);
        if (layerIt == layer_id.end())
        {
            CV_LOG_ERROR(NULL, "DNN/ONNX: can't find layer for output name: '" << output_name << "'. Does model imported properly?");
            continue;
        }

        const LayerInfo& li = layerIt->second;
        int outputId = dstNet.registerOutput(output_name, li.layerId, li.outputId); CV_UNUSED(outputId);
        // no need to duplicate message from engine: CV_LOG_DEBUG(NULL, "DNN/ONNX: registered output='" << output_name << "' with id=" << outputId);
    }

    CV_LOG_DEBUG(NULL, (DNN_DIAGNOSTICS_RUN ? "DNN/ONNX: diagnostic run completed!" : "DNN/ONNX: import completed!"));
}

1.1 Initializing the parse functions (parseOperatorSet())

The core of parseOperatorSet() is buildDispatchMap_ONNX_AI(), which builds the map of parse functions:

void ONNXImporter::parseOperatorSet()
{
    // ...
    CV_LOG_INFO(NULL, "DNN/ONNX: ONNX opset version = " << onnx_opset);

    // build the map of parse functions; each operator of the model is later parsed through this map
    buildDispatchMap_ONNX_AI(onnx_opset);
    for (const auto& pair : onnx_opset_map)
    {
        if (pair.first == str_domain_ai_onnx)
        {
            continue;  // done above
        }
        else if (pair.first == "com.microsoft")
        {
            buildDispatchMap_COM_MICROSOFT(pair.second);
        }
        else
        {
            CV_LOG_INFO(NULL, "DNN/ONNX: unknown domain='" << pair.first << "' version=" << pair.second << ". No dispatch map, you may need to register 'custom' layers.");
        }
    }
}
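The dispatch map is essentially a map from operator-type strings ("Conv", "Relu", ...) to member-function pointers of the importer. The following self-contained toy sketch illustrates the pattern; the names are illustrative and not the actual OpenCV declarations:

#include <iostream>
#include <map>
#include <string>

// Toy importer demonstrating the dispatch-map pattern.
class ToyImporter
{
public:
    typedef void (ToyImporter::*NodeParser)(const std::string& nodeName);
    typedef std::map<std::string, NodeParser> DispatchMap;

    void parseConv(const std::string& n) { std::cout << "parse Conv node " << n << "\n"; }
    void parseRelu(const std::string& n) { std::cout << "parse Relu node " << n << "\n"; }

    void buildDispatchMap()
    {
        dispatch["Conv"] = &ToyImporter::parseConv;
        dispatch["Relu"] = &ToyImporter::parseRelu;
    }

    void handleNode(const std::string& opType, const std::string& nodeName)
    {
        DispatchMap::const_iterator it = dispatch.find(opType);
        if (it != dispatch.end())
            (this->*(it->second))(nodeName);   // dispatch to the registered parser
        else
            std::cout << "no parser for op type " << opType << "\n";
    }

private:
    DispatchMap dispatch;
};

int main()
{
    ToyImporter imp;
    imp.buildDispatchMap();
    imp.handleNode("Conv", "model.0/Conv");
    imp.handleNode("Relu", "model.1/Relu");
    return 0;
}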

1.2 Extracting tensors (getGraphTensors())

getGraphTensors() converts the tensor data extracted from the file into OpenCV Mat objects for later use:

std::map<std::string, Mat> ONNXImporter::getGraphTensors(const opencv_onnx::GraphProto& graph_proto)
{
    std::map<std::string, Mat> layers_weights;

    for (int i = 0; i < graph_proto.initializer_size(); i++)
    {
        const opencv_onnx::TensorProto& tensor_proto = graph_proto.initializer(i);
        // dump tensor info
        dumpTensorProto(i, tensor_proto, "initializer");

        // convert the tensor into a Mat, used later when building the OpenCV network
        Mat mat = getMatFromTensor(tensor_proto);
        releaseONNXTensor(const_cast<opencv_onnx::TensorProto&>(tensor_proto));  // drop already loaded data

        if (DNN_DIAGNOSTICS_RUN && mat.empty())
            continue;

        // insert the data into the map
        layers_weights.insert(std::make_pair(tensor_proto.name(), mat));
        constBlobsExtraInfo.insert(std::make_pair(tensor_proto.name(), TensorInfo(tensor_proto.dims_size())));
    }
    return layers_weights;
}

The conversion itself is done by getMatFromTensor():

Mat getMatFromTensor(const opencv_onnx::TensorProto& tensor_proto)
{
    if (tensor_proto.raw_data().empty() && tensor_proto.float_data().empty() &&
        tensor_proto.double_data().empty() && tensor_proto.int64_data().empty() &&
        tensor_proto.int32_data().empty())
        return Mat();

    opencv_onnx::TensorProto_DataType datatype = tensor_proto.data_type();
    Mat blob;
    std::vector<int> sizes;
    for (int i = 0; i < tensor_proto.dims_size(); i++) {
        sizes.push_back(tensor_proto.dims(i));
    }
    if (sizes.empty())
        sizes.assign(1, 1);
    if (datatype == opencv_onnx::TensorProto_DataType_FLOAT) {
        if (!tensor_proto.float_data().empty()) {
            const ::google::protobuf::RepeatedField<float> field = tensor_proto.float_data();
            Mat(sizes, CV_32FC1, (void*)field.data()).copyTo(blob);
        }
        else {
            char* val = const_cast<char*>(tensor_proto.raw_data().c_str());
            // build a Mat header over the raw bytes and copy it into blob
            Mat(sizes, CV_32FC1, val).copyTo(blob);
        }
    }
    // ...
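The key idiom above is wrapping the raw tensor bytes in a Mat header and then copying it, so that OpenCV ends up owning the data. A tiny self-contained illustration of that idiom (the buffer contents and shape are made up):

#include <opencv2/core.hpp>
#include <vector>

int main()
{
    std::vector<float> raw = {1.f, 2.f, 3.f, 4.f, 5.f, 6.f};  // pretend this came from an ONNX initializer
    std::vector<int> sizes = {2, 3};                           // tensor dims, like TensorProto.dims

    cv::Mat header(sizes, CV_32FC1, raw.data());  // header over external memory, no copy yet
    cv::Mat blob;
    header.copyTo(blob);                          // deep copy owned by OpenCV
    CV_Assert(blob.at<float>(1, 2) == 6.f);
    return 0;
}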

1.3 Node handling (handleNode())

Its job is to process one node (i.e. one operator) of the ONNX model and turn it into a layer that OpenCV can use internally. This function is the core of the whole ONNX import flow.

void ONNXImporter::handleNode(const opencv_onnx::NodeProto& node_proto)
{
    CV_Assert(node_proto.output_size() >= 1);
    const std::string& name = extractNodeName(node_proto);
    const std::string& layer_type = node_proto.op_type();
    const std::string& layer_type_domain = getLayerTypeDomain(node_proto);
    const auto& dispatch = getDispatchMap(node_proto);

    CV_LOG_INFO(NULL, "DNN/ONNX: processing node with " << node_proto.input_size() << " inputs and "
            << node_proto.output_size() << " outputs: "
            << cv::format("[%s]:(%s)", layer_type.c_str(), name.c_str())
            << cv::format(" from %sdomain='", onnx_opset_map.count(layer_type_domain) == 1 ? "" : "undeclared ")
            << layer_type_domain << "'"
    );

    // check whether a matching set of parsers exists
    if (dispatch.empty())
    {
        CV_LOG_WARNING(NULL, "DNN/ONNX: missing dispatch map for domain='" << layer_type_domain << "'");
    }

    LayerParams layerParams;
    try
    {
        // FIXIT not all cases can be repacked into "LayerParams". Importer should handle such cases directly for each "layer_type"
        layerParams = getLayerParams(node_proto);

        layerParams.name = name;
        layerParams.type = layer_type;
        layerParams.set("has_dynamic_shapes", hasDynamicShapes);

        setParamsDtype(layerParams, node_proto);

        DispatchMap::const_iterator iter = dispatch.find(layer_type);
        if (iter != dispatch.end())
        {
            // call the parse function
            CALL_MEMBER_FN(*this, iter->second)(layerParams, node_proto);
        }
        else
        {
            parseCustomLayer(layerParams, node_proto);
        }
    }
    // ...
}

The parse function is chosen according to the operator: a convolution (Conv) node, for example, is handled by parseConv():

void ONNXImporter::parseConv(LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto_)
{
    opencv_onnx::NodeProto node_proto = node_proto_;
    /* A convolution layer needs at least two inputs:
       (1) the input feature map
       (2) the weight tensor
       (3) an optional bias term */
    CV_Assert(node_proto.input_size() >= 2);
    // mark the layer type as Convolution; the corresponding layer class is built from this later
    layerParams.type = "Convolution";
    // extract the weights and the bias
    for (int j = 1; j < node_proto.input_size(); j++) {
        // check whether the input is a constant tensor
        if (constBlobs.find(node_proto.input(j)) != constBlobs.end())
        {
            // fetch the corresponding Mat via getBlob() and push it into blobs
            layerParams.blobs.push_back(getBlob(node_proto, j));
        }
    }
    int outCn = layerParams.blobs.empty() ? outShapes[node_proto.input(1)][0] : layerParams.blobs[0].size[0];
    layerParams.set("num_output", outCn);

    addLayer(layerParams, node_proto);
}

2. Input preparation (blobFromImage() & setInput())

cv::dnn::blobFromImage() converts input Mat data into a 4-dimensional blob (batch, channel, height, width) suitable for the subsequent inference. The function is defined in sources/modules/dnn/src/dnn_utils.cpp:

void blobFromImage(InputArray image, OutputArray blob, double scalefactor, const Size& size, const Scalar& mean, bool swapRB, bool crop, int ddepth)
{
    CV_TRACE_FUNCTION();
    if (image.kind() == _InputArray::UMAT) {
        // UMat is the OpenCL-accelerated image representation
        std::vector<UMat> images(1, image.getUMat());
        blobFromImages(images, blob, scalefactor, size, mean, swapRB, crop, ddepth);
    } else {
        std::vector<Mat> images(1, image.getMat());
        blobFromImages(images, blob, scalefactor, size, mean, swapRB, crop, ddepth);
    }
}
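For reference, a typical call of the batched variant blobFromImages() could look like this (paths and sizes are illustrative); it shows how the batch dimension of the NCHW blob is formed:

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    // Two frames batched together ("a.jpg"/"b.jpg" are placeholder paths)
    std::vector<cv::Mat> frames = { cv::imread("a.jpg"), cv::imread("b.jpg") };
    cv::Mat blob = cv::dnn::blobFromImages(
            frames,
            1.0 / 255.0,          // scalefactor applied to every pixel
            cv::Size(224, 224),   // spatial size of the blob
            cv::Scalar(),         // mean subtracted per channel
            true,                 // swapRB: BGR -> RGB
            false);               // crop
    // NCHW layout: batch = 2, channels = 3, height = 224, width = 224
    CV_Assert(blob.dims == 4 && blob.size[0] == 2 && blob.size[1] == 3
              && blob.size[2] == 224 && blob.size[3] == 224);
    return 0;
}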

After several layers of calls, the actual layout conversion is performed by blobFromImagesNCHWImpl():

// Tinp = uint8_t, Tout = float
template<typename Tinp, typename Tout>
void blobFromImagesNCHWImpl(const std::vector<Mat>& images, Mat& blob_, const Image2BlobParams& param)
{
    int w = images[0].cols;
    int h = images[0].rows;
    int wh = w * h;
    int nch = images[0].channels();
    CV_Assert(nch == 1 || nch == 3 || nch == 4);
    int sz[] = { (int)images.size(), nch, h, w};
    blob_.create(4, sz, param.ddepth);

    for (size_t k = 0; k < images.size(); ++k) // iterate over all input images
    {
        CV_Assert(images[k].depth() == images[0].depth());
        CV_Assert(images[k].channels() == images[0].channels());
        CV_Assert(images[k].size() == images[0].size());

        // plane pointers; for RGBA input, A is the alpha channel
        Tout* p_blob = blob_.ptr<Tout>() + k * nch * wh;
        Tout* p_blob_r = p_blob;
        Tout* p_blob_g = p_blob + wh;
        Tout* p_blob_b = p_blob + 2 * wh;
        Tout* p_blob_a = p_blob + 3 * wh;

        if (param.swapRB) // swap the R and B planes when requested
            std::swap(p_blob_r, p_blob_b);

        for (size_t i = 0; i < h; ++i)
        {
            const Tinp* p_img_row = images[k].ptr<Tinp>(i);

            if (nch == 1) // grayscale image
            {
                for (size_t j = 0; j < w; ++j)
                {
                    p_blob[i * w + j] = p_img_row[j];
                }
            }
            else if (nch == 3) // RGB
            {
                /* layout of p_img_row (RGB; with swapRB the source is BGR):
                       r g b r g b r g b ...
                   layout of p_blob:
                       r0 r1 r2 ... g0 g1 g2 ... b0 b1 b2 ... */
                for (size_t j = 0; j < w; ++j)
                {
                    p_blob_r[i * w + j] = p_img_row[j * 3    ];
                    p_blob_g[i * w + j] = p_img_row[j * 3 + 1];
                    p_blob_b[i * w + j] = p_img_row[j * 3 + 2];
                }
            }
            else // if (nch == 4) -> RGBA
            {
                /* layout of p_img_row (RGBA; with swapRB the source is BGRA):
                       r g b a r g b a r g b a ...
                   layout of p_blob:
                       r0 r1 r2 ... g0 g1 g2 ... b0 b1 b2 ... a0 a1 a2 ... */
                for (size_t j = 0; j < w; ++j)
                {
                    p_blob_r[i * w + j] = p_img_row[j * 4    ];
                    p_blob_g[i * w + j] = p_img_row[j * 4 + 1];
                    p_blob_b[i * w + j] = p_img_row[j * 4 + 2];
                    p_blob_a[i * w + j] = p_img_row[j * 4 + 3];
                }
            }
        }
    }

    if (param.mean == Scalar() && param.scalefactor == Scalar::all(1.0))
        return;

    CV_CheckTypeEQ(param.ddepth, CV_32F, "Scaling and mean substraction is supported only for CV_32F blob depth");

    /* apply the given mean subtraction and scale factor */
    for (size_t k = 0; k < images.size(); ++k)
    {
        for (size_t ch = 0; ch < nch; ++ch)
        {
            float cur_mean = param.mean[ch];
            float cur_scale = param.scalefactor[ch];
            Tout* p_blob = blob_.ptr<Tout>() + k * nch * wh + ch * wh;
            for (size_t i = 0; i < wh; ++i)
            {
                p_blob[i] = (p_blob[i] - cur_mean) * cur_scale;
            }
        }
    }
}

setInput() assigns the input data to the network:

void Net::setInput(InputArray blob, const String& name, double scalefactor, const Scalar& mean)
{
    CV_TRACE_FUNCTION();
    CV_TRACE_ARG_VALUE(name, "name", name.c_str());
    CV_Assert(impl);
    return impl->setInput(blob, name, scalefactor, mean);
}

impl->setInput() dispatches to Impl::setInput():

void Net::Impl::setInput(InputArray blob, const String& name, double scalefactor, const Scalar& mean)
{
    // performance hint: skip denormal (extremely small) floating-point values to speed up computation
    FPDenormalsIgnoreHintScope fp_denormals_ignore_scope;

    /* Example:
       Input -> Conv1 -> ReLU ------> Conv2 -> Output
                    \                /
                     \-> MaxPool ---/
       If the inputs of Conv2 come from ReLU and MaxPool, two LayerPins are needed:
       {lid_of_ReLU, 0}     (the first output of ReLU)
       {lid_of_MaxPool, 0}  (the first output of MaxPool) */
    // a LayerPin identifies an input or output slot of a layer
    LayerPin pin;
    // layer id 0 is the network input layer, i.e. this pin refers to an input port of the network
    pin.lid = 0;
    // resolve the output port index from the layer name
    pin.oid = resolvePinOutputName(getLayerData(pin.lid), name);

    if (!pin.valid())
        CV_Error(Error::StsObjectNotFound, "Requested blob \"" + name + "\" not found");

    Mat blob_ = blob.getMat();  // can't use InputArray directly due MatExpr stuff
    MatShape blobShape = shape(blob_);

    // ...

    LayerData& ld = layers[pin.lid];
    const int numInputs = std::max(pin.oid + 1, (int)ld.requiredOutputs.size());
    ld.outputBlobs.resize(numInputs);
    ld.outputBlobsWrappers.resize(numInputs);
    netInputLayer->inputsData.resize(numInputs);
    netInputLayer->scaleFactors.resize(numInputs);
    netInputLayer->means.resize(numInputs);
    MatShape prevShape = shape(netInputLayer->inputsData[pin.oid]);
    bool oldShape = prevShape == blobShape;

    // copy the data into the input layer
    blob_.copyTo(netInputLayer->inputsData[pin.oid]);
    if (!oldShape)
        ld.outputBlobs[pin.oid] = netInputLayer->inputsData[pin.oid];

    if (!ld.outputBlobsWrappers[pin.oid].empty())
    {
        ld.outputBlobsWrappers[pin.oid]->setHostDirty();
    }
    netInputLayer->scaleFactors[pin.oid] = scalefactor;
    netInputLayer->means[pin.oid] = mean;
    netWasAllocated = netWasAllocated && oldShape;
}
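In practice this means the same blob can be fed in several equivalent ways; a short sketch (paths and the input name "images" are model-dependent placeholders):

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");      // placeholder path
    cv::Mat img = cv::imread("input.jpg");                          // placeholder path
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224));

    net.setInput(blob);                                             // first (or only) network input
    net.setInput(blob, "images");                                   // input selected by name
    net.setInput(blob, "images", 1.0 / 255.0, cv::Scalar(0, 0, 0)); // input layer applies extra scale/mean
    return 0;
}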

3. Inference (forward())

Network inference is implemented by Net::forward(), defined in sources/modules/dnn/src/net.cpp:

void Net::forward(OutputArrayOfArrays outputBlobs, const std::vector<String>& outBlobNames)
{
    CV_TRACE_FUNCTION();
    CV_Assert(impl);
    CV_Assert(!empty());
    return impl->forward(outputBlobs, outBlobNames);
}

// outBlobNames is supplied by getUnconnectedOutLayersNames(), which returns the names of all
// layers with unconnected outputs, i.e. layers whose outputs are not consumed by any other layer;
// they represent the final outputs of the network
std::vector<String> Net::getUnconnectedOutLayersNames() const
{
    CV_TRACE_FUNCTION();
    CV_Assert(impl);
    return impl->getUnconnectedOutLayersNames();
}

The definition of impl->forward():

void Net::Impl::forward(OutputArrayOfArrays outputBlobs, const std::vector<String>& outBlobNames)
{
    CV_Assert(!empty());
    FPDenormalsIgnoreHintScope fp_denormals_ignore_scope;

    std::vector<LayerPin> pins;
    for (int i = 0; i < outBlobNames.size(); i++)
    {
        // getPinByAlias() finds the output pin of the layer by name, i.e. layer id and output index;
        // if outBlobNames contains a single name, only one pin is collected
        pins.push_back(getPinByAlias(outBlobNames[i]));
    }

    // set up the network structure
    setUpNet(pins);

    // find the last layer that has to be computed
    LayerPin out = getLatestLayerPin(pins);

    // run the forward pass up to that layer
    forwardToLayer(getLayerData(out.lid));

    // walk the requested pins and collect the output blob of each one
    std::vector<Mat> matvec;
    for (int i = 0; i < pins.size(); i++)
    {
        matvec.push_back(getBlob(pins[i]));
    }

    outputBlobs.create((int)matvec.size(), 1, CV_32F/*FIXIT*/, -1);  // allocate vector
    outputBlobs.assign(matvec);
}

forwardToLayer() carries out the actual inference; it is defined in net_impl.cpp:

void Net::Impl::forwardToLayer(LayerData& ld, bool clearFlags)
{
    CV_TRACE_FUNCTION();

    // when clearFlags is true, every layer of the network is visited and reset
    if (clearFlags)
    {
        // walk all layers and clear their flags
        for (MapIdToLayerData::iterator it = layers.begin(); it != layers.end(); it++)
            it->second.flag = 0;
    }

    // already was forwarded
    if (ld.flag)
        return;

    // forward parents
    // walk every layer that precedes the target layer and make sure it has already been forwarded;
    // since the target passed in is the last output layer, this effectively runs the whole network
    for (MapIdToLayerData::iterator it = layers.begin(); it != layers.end() && (it->second.id < ld.id); ++it)
    {
        LayerData& ld = it->second;
        if (ld.flag)
            continue;
        forwardLayer(ld);
    }

    // forward itself
    forwardLayer(ld);

#ifdef HAVE_CUDA
    if (preferableBackend == DNN_BACKEND_CUDA)
        cudaInfo->context.stream.synchronize();
#endif
}

forwardLayer() also lives in net_impl.cpp; it eventually calls layer->forward(inps, ld.outputBlobs, ld.internals);, which dispatches to the forward() implementation of the concrete layer type. For a Conv layer this is forward() in convolution_layer.cpp, which performs the actual fastConv() computation; the convolution kernels are fairly low-level and are not analyzed further here.

void forward(InputArrayOfArrays inputs_arr, OutputArrayOfArrays outputs_arr, OutputArrayOfArrays internals_arr) CV_OVERRIDE
{
    CV_TRACE_FUNCTION();
    CV_TRACE_ARG_VALUE(name, "name", name.c_str()); // name is the current layer name, e.g. "model.0/Conv"

    CV_OCL_RUN(IS_DNN_OPENCL_TARGET(preferableTarget),
               forward_ocl(inputs_arr, outputs_arr, internals_arr))

    // ...
    {
        int nstripes = std::max(getNumThreads(), 1);
        int conv_dim = CONV_2D;
        if (inputs[0].dims == 3)
            conv_dim = CONV_1D;
        if (inputs[0].dims == 5)
            conv_dim = CONV_3D;

        // Initialization of FastCovn2d, pack weight.
        if (!fastConvImpl || variableWeight)
        {
            int K = outputs[0].size[1];
            int C = inputs[0].size[1];

            // Winograd only works when input h and w >= 12.
            bool canUseWinograd = useWinograd && conv_dim == CONV_2D && inputs[0].size[2] >= 12 && inputs[0].size[3] >= 12;

            CV_Assert(outputs[0].size[1] % ngroups == 0);

            // initialize fastConv
            fastConvImpl = initFastConv(weightsMat, &biasvec[0], ngroups, K, C, kernel_size, strides,
                                        dilations, pads_begin, pads_end, conv_dim,
                                        preferableTarget == DNN_TARGET_CPU_FP16, canUseWinograd);
            // This is legal to release weightsMat here as this is not used anymore for
            // OpenCV inference. If network needs to be reinitialized (new shape, new backend)
            // a new version of weightsMat is created at .finalize() from original weights
            weightsMat.release();
        }

        // run fastConv
        runFastConv(inputs[0], outputs[0], fastConvImpl, nstripes, activ, reluslope, fusedAdd);
    }
}

Example of using net.forward(): net.getUnconnectedOutLayersNames() returns the names of every layer with an unconnected output, i.e. layers that are not used as input by any other layer; these are normally the output layers, so passing them runs the whole model. The results are stored in outputs.

std::vector<cv::Mat> outputs;
net.forward(outputs, net.getUnconnectedOutLayersNames());

To obtain the intermediate results of particular layers, for example layers {10, 20, 30}, change the second argument of net.forward():

std::vector<cv::Mat> outputs;
net.forward(outputs, /* the names of layers 10, 20 and 30 */);
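A possible way to obtain those names, assuming net has been loaded as above (the indices are illustrative and depend on the model):

// Look up the layer names first, then request those outputs explicitly.
std::vector<cv::String> layerNames = net.getLayerNames();
std::vector<cv::String> wanted = { layerNames[10], layerNames[20], layerNames[30] };

std::vector<cv::Mat> intermediates;
net.forward(intermediates, wanted);   // intermediates[i] holds the output of wanted[i]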
