
Error report

build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/OutputUnit_d.cc: In member function ‘int OutputUnit_d::getVCBufferOccupancy(int)’:
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/OutputUnit_d.cc:135:40: error: no matching function for call to ‘flitBuffer_d::isReady()’
if (m_out_buffer_vcis[vc]->isReady()) {
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/OutputUnit_d.cc:135:40: note: candidate is:
In file included from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/NetworkLink_d.hh:38:0,
from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/CreditLink_d.hh:34,
from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/OutputUnit_d.hh:38,
from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/OutputUnit_d.cc:32:
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/flitBuffer_d.hh:47:10: note: bool flitBuffer_d::isReady(Cycles)
bool isReady(Cycles curTime);
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/flitBuffer_d.hh:47:10: note: candidate expects 1 argument, 0 provided
scons: *** [build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/OutputUnit_d.o] Error 1
scons: building terminated because of errors.

In file included from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:41:0:
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/VCallocator_d.hh: In member function ‘void Router_d::collateStats()’:
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/VCallocator_d.hh:90:25: error: ‘std::vector<double> VCallocator_d::m_local_arbiter_activity’ is private
std::vector<double> m_local_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:207:17: error: within this context
m_vc_alloc->m_local_arbiter_activity = m_vc_local_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:207:42: error: no match for ‘operator=’ (operand types are ‘std::vector<double>’ and ‘Stats::Scalar’)
m_vc_alloc->m_local_arbiter_activity = m_vc_local_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:207:42: note: candidates are:
In file included from /usr/include/c++/4.8/vector:69:0,
from /usr/include/c++/4.8/bits/random.h:34,
from /usr/include/c++/4.8/random:50,
from /usr/include/c++/4.8/bits/stl_algo.h:65,
from /usr/include/c++/4.8/algorithm:62,
from build/X86_VI_hammer_GPU/base/stl_helpers.hh:34,
from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:31:
/usr/include/c++/4.8/bits/vector.tcc:160:5: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = double; _Alloc = std::allocator<double>]
vector<_Tp, _Alloc>::
^
/usr/include/c++/4.8/bits/vector.tcc:160:5: note: no known conversion for argument 1 from ‘Stats::Scalar’ to ‘const std::vector<double>&’
In file included from /usr/include/c++/4.8/vector:64:0,
from /usr/include/c++/4.8/bits/random.h:34,
from /usr/include/c++/4.8/random:50,
from /usr/include/c++/4.8/bits/stl_algo.h:65,
from /usr/include/c++/4.8/algorithm:62,
from build/X86_VI_hammer_GPU/base/stl_helpers.hh:34,
from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:31:
/usr/include/c++/4.8/bits/stl_vector.h:439:7: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(std::vector<_Tp, _Alloc>&&) [with _Tp = double; _Alloc = std::allocator<double>]
operator=(vector&& __x) noexcept(_Alloc_traits::_S_nothrow_move())
^
/usr/include/c++/4.8/bits/stl_vector.h:439:7: note: no known conversion for argument 1 from ‘Stats::Scalar’ to ‘std::vector<double>&&’
/usr/include/c++/4.8/bits/stl_vector.h:461:7: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(std::initializer_list<_Tp>) [with _Tp = double; _Alloc = std::allocator<double>]
operator=(initializer_list<value_type> __l)
^
/usr/include/c++/4.8/bits/stl_vector.h:461:7: note: no known conversion for argument 1 from ‘Stats::Scalar’ to ‘std::initializer_list<double>’
In file included from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:41:0:
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/VCallocator_d.hh:91:25: error: ‘std::vector<double> VCallocator_d::m_global_arbiter_activity’ is private
std::vector<double> m_global_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:208:17: error: within this context
m_vc_alloc->m_global_arbiter_activity = m_vc_global_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:208:43: error: no match for ‘operator=’ (operand types are ‘std::vector<double>’ and ‘Stats::Scalar’)
m_vc_alloc->m_global_arbiter_activity = m_vc_global_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:208:43: note: candidates are:
In file included from /usr/include/c++/4.8/vector:69:0,
from /usr/include/c++/4.8/bits/random.h:34,
from /usr/include/c++/4.8/random:50,
from /usr/include/c++/4.8/bits/stl_algo.h:65,
from /usr/include/c++/4.8/algorithm:62,
from build/X86_VI_hammer_GPU/base/stl_helpers.hh:34,
from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:31:
/usr/include/c++/4.8/bits/vector.tcc:160:5: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = double; _Alloc = std::allocator<double>]
vector<_Tp, _Alloc>::
^
/usr/include/c++/4.8/bits/vector.tcc:160:5: note: no known conversion for argument 1 from ‘Stats::Scalar’ to ‘const std::vector<double>&’
In file included from /usr/include/c++/4.8/vector:64:0,
from /usr/include/c++/4.8/bits/random.h:34,
from /usr/include/c++/4.8/random:50,
from /usr/include/c++/4.8/bits/stl_algo.h:65,
from /usr/include/c++/4.8/algorithm:62,
from build/X86_VI_hammer_GPU/base/stl_helpers.hh:34,
from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:31:
/usr/include/c++/4.8/bits/stl_vector.h:439:7: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(std::vector<_Tp, _Alloc>&&) [with _Tp = double; _Alloc = std::allocator<double>]
operator=(vector&& __x) noexcept(_Alloc_traits::_S_nothrow_move())
^
/usr/include/c++/4.8/bits/stl_vector.h:439:7: note: no known conversion for argument 1 from ‘Stats::Scalar’ to ‘std::vector<double>&&’
/usr/include/c++/4.8/bits/stl_vector.h:461:7: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(std::initializer_list<_Tp>) [with _Tp = double; _Alloc = std::allocator<double>]
operator=(initializer_list<value_type> __l)
^
/usr/include/c++/4.8/bits/stl_vector.h:461:7: note: no known conversion for argument 1 from ‘Stats::Scalar’ to ‘std::initializer_list<double>’
In file included from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:39:0:
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/SWallocator_d.hh:73:12: error: ‘double SWallocator_d::m_local_arbiter_activity’ is private
double m_local_arbiter_activity, m_global_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:209:17: error: within this context
m_sw_alloc->m_local_arbiter_activity = m_sw_local_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:209:42: error: cannot convert ‘Stats::Scalar’ to ‘double’ in assignment
m_sw_alloc->m_local_arbiter_activity = m_sw_local_arbiter_activity;
^
In file included from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:39:0:
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/SWallocator_d.hh:73:38: error: ‘double SWallocator_d::m_global_arbiter_activity’ is private
double m_local_arbiter_activity, m_global_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:210:17: error: within this context
m_sw_alloc->m_global_arbiter_activity = m_sw_global_arbiter_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:210:43: error: cannot convert ‘Stats::Scalar’ to ‘double’ in assignment
m_sw_alloc->m_global_arbiter_activity = m_sw_global_arbiter_activity;
^
In file included from build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:40:0:
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Switch_d.hh:64:12: error: ‘double Switch_d::m_crossbar_activity’ is private
double m_crossbar_activity;
^
build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.cc:211:37: error: within this context
m_crossbar_activity = m_switch->m_crossbar_activity;
^
scons: *** [build/X86_VI_hammer_GPU/mem/ruby/network/garnet/fixed-pipeline/Router_d.o] Error 1
scons: building terminated because of errors.


