
go-ethereum core: StateDB

Overview

StateDB manages Ethereum state, such as account data and contract storage.
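For orientation, here is a minimal read-only usage sketch. It assumes a *state.StateDB has already been constructed for some state root (construction is version-dependent and not shown); the accessors are the standard StateDB read methods, and exact return types (for example *uint256.Int balances) depend on the go-ethereum version.

package statedbexample

import (
    "fmt"

    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/state"
)

// inspect prints a few pieces of account and storage state through StateDB's
// read accessors. `statedb` is assumed to be already opened at some state root.
func inspect(statedb *state.StateDB, addr common.Address, slot common.Hash) {
    balance := statedb.GetBalance(addr)   // account balance (*uint256.Int in recent versions)
    nonce := statedb.GetNonce(addr)       // account nonce
    value := statedb.GetState(addr, slot) // one 32-byte contract storage slot
    fmt.Println(balance, nonce, value)
}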

Structure

[Class diagram: StateDB depends on the Database and Trie interfaces and aggregates stateObject and journal]

In the state package, StateDB is defined as follows:

type StateDB struct {
    db         Database
    prefetcher *triePrefetcher
    reader     Reader
    trie       Trie // it's resolved on first access

    // originalRoot is the pre-state root, before any changes were made.
    // It will be updated when the Commit is called.
    originalRoot common.Hash

    // This map holds 'live' objects, which will get modified while
    // processing a state transition.
    stateObjects map[common.Address]*stateObject

    // This map holds 'deleted' objects. An object with the same address
    // might also occur in the 'stateObjects' map due to account
    // resurrection. The account value is tracked as the original value
    // before the transition. This map is populated at the transaction
    // boundaries.
    stateObjectsDestruct map[common.Address]*stateObject

    // This map tracks the account mutations that occurred during the
    // transition. Uncommitted mutations belonging to the same account
    // can be merged into a single one which is equivalent from database's
    // perspective. This map is populated at the transaction boundaries.
    mutations map[common.Address]*mutation

    // DB error.
    // State objects are used by the consensus core and VM which are
    // unable to deal with database-level errors. Any error that occurs
    // during a database read is memoized here and will eventually be
    // returned by StateDB.Commit. Notably, this error is also shared
    // by all cached state objects in case the database failure occurs
    // when accessing state of accounts.
    dbErr error

    // The refund counter, also used by state transitioning.
    refund uint64

    // The tx context and all occurred logs in the scope of transaction.
    thash   common.Hash
    txIndex int
    logs    map[common.Hash][]*types.Log
    logSize uint

    // Preimages occurred seen by VM in the scope of block.
    preimages map[common.Hash][]byte

    // Per-transaction access list
    accessList   *accessList
    accessEvents *AccessEvents

    // Transient storage
    transientStorage transientStorage

    // Journal of state modifications. This is the backbone of
    // Snapshot and RevertToSnapshot.
    journal *journal

    // State witness if cross validation is needed
    witness      *stateless.Witness
    witnessStats *stateless.WitnessStats

    // Measurements gathered during execution for debugging purposes
    AccountReads    time.Duration
    AccountHashes   time.Duration
    AccountUpdates  time.Duration
    AccountCommits  time.Duration
    StorageReads    time.Duration
    StorageUpdates  time.Duration
    StorageCommits  time.Duration
    SnapshotCommits time.Duration
    TrieDBCommits   time.Duration

    AccountLoaded  int          // Number of accounts retrieved from the database during the state transition
    AccountUpdated int          // Number of accounts updated during the state transition
    AccountDeleted int          // Number of accounts deleted during the state transition
    StorageLoaded  int          // Number of storage slots retrieved from the database during the state transition
    StorageUpdated atomic.Int64 // Number of storage slots updated during the state transition
    StorageDeleted atomic.Int64 // Number of storage slots deleted during the state transition
}
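The mutations map deserves a note: changes to the same account made by different transactions in one block collapse into a single per-address entry, which is all the database ultimately needs to see. The following self-contained illustration (not geth's actual code) shows the idea, with the mutation fields invented for the example.

package main

import "fmt"

type mutationKind int

const (
    update mutationKind = iota
    deletion
)

// mutation is a toy stand-in for geth's internal per-account mutation record.
type mutation struct {
    kind  mutationKind
    nonce uint64
}

func main() {
    mutations := map[string]*mutation{}

    // tx1 bumps the nonce, tx2 bumps it again, tx3 self-destructs the account.
    mutations["0xabc"] = &mutation{kind: update, nonce: 1}
    mutations["0xabc"] = &mutation{kind: update, nonce: 2}
    mutations["0xabc"] = &mutation{kind: deletion}

    // At commit time only the final, merged mutation per address is applied.
    fmt.Printf("%+v\n", *mutations["0xabc"]) // {kind:1 nonce:0}
}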

Database: an interface for accessing tries and contract code. CachingDB is an implementation of the Database interface.

[Class diagram: CachingDB implements the Database interface]
type Database interface {
    // Reader returns a state reader associated with the specified state root.
    Reader(root common.Hash) (Reader, error)

    // OpenTrie opens the main account trie.
    OpenTrie(root common.Hash) (Trie, error)

    // OpenStorageTrie opens the storage trie of an account.
    OpenStorageTrie(stateRoot common.Hash, address common.Address, root common.Hash, trie Trie) (Trie, error)

    // PointCache returns the cache holding points used in verkle tree key computation
    PointCache() *utils.PointCache

    // TrieDB returns the underlying trie database for managing trie nodes.
    TrieDB() *triedb.Database

    // Snapshot returns the underlying state snapshot.
    Snapshot() *snapshot.Tree
}

type CachingDB struct {
    disk          ethdb.KeyValueStore
    triedb        *triedb.Database
    snap          *snapshot.Tree
    codeCache     *lru.SizeConstrainedCache[common.Hash, []byte]
    codeSizeCache *lru.Cache[common.Hash, int]
    pointCache    *utils.PointCache

    // Transition-specific fields
    TransitionStatePerRoot *lru.Cache[common.Hash, *overlay.TransitionState]
}
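A minimal sketch of how these two abstractions compose: given a state.Database implementation (for example a CachingDB) and an existing state root, the account trie can be opened and an account read. The signatures used here are exactly the ones in the interfaces above; how you obtain `db` and `root` is assumed.

package statedbexample

import (
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/state"
    "github.com/ethereum/go-ethereum/core/types"
)

// readAccount opens the main account trie at `root` and looks up one account.
func readAccount(db state.Database, root common.Hash, addr common.Address) (*types.StateAccount, error) {
    tr, err := db.OpenTrie(root) // open the main account trie at the given state root
    if err != nil {
        return nil, err
    }
    // GetAccount returns (nil, nil) if the account does not exist in the trie.
    return tr.GetAccount(addr)
}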

Trie: the Merkle Patricia trie interface. It has two implementations, VerkleTrie and StateTrie.

[Class diagram: VerkleTrie and StateTrie implement the Trie interface; StateTrie wraps trie.Trie]
type Trie interface {
    // GetKey returns the sha3 preimage of a hashed key that was previously used
    // to store a value.
    //
    // TODO(fjl): remove this when StateTrie is removed
    GetKey([]byte) []byte

    // GetAccount abstracts an account read from the trie. It retrieves the
    // account blob from the trie with provided account address and decodes it
    // with associated decoding algorithm. If the specified account is not in
    // the trie, nil will be returned. If the trie is corrupted(e.g. some nodes
    // are missing or the account blob is incorrect for decoding), an error will
    // be returned.
    GetAccount(address common.Address) (*types.StateAccount, error)

    // PrefetchAccount attempts to resolve specific accounts from the database
    // to accelerate subsequent trie operations.
    PrefetchAccount([]common.Address) error

    // GetStorage returns the value for key stored in the trie. The value bytes
    // must not be modified by the caller. If a node was not found in the database,
    // a trie.MissingNodeError is returned.
    GetStorage(addr common.Address, key []byte) ([]byte, error)

    // PrefetchStorage attempts to resolve specific storage slots from the database
    // to accelerate subsequent trie operations.
    PrefetchStorage(addr common.Address, keys [][]byte) error

    // UpdateAccount abstracts an account write to the trie. It encodes the
    // provided account object with associated algorithm and then updates it
    // in the trie with provided address.
    UpdateAccount(address common.Address, account *types.StateAccount, codeLen int) error

    // UpdateStorage associates key with value in the trie. If value has length zero,
    // any existing value is deleted from the trie. The value bytes must not be modified
    // by the caller while they are stored in the trie. If a node was not found in the
    // database, a trie.MissingNodeError is returned.
    UpdateStorage(addr common.Address, key, value []byte) error

    // DeleteAccount abstracts an account deletion from the trie.
    DeleteAccount(address common.Address) error

    // DeleteStorage removes any existing value for key from the trie. If a node
    // was not found in the database, a trie.MissingNodeError is returned.
    DeleteStorage(addr common.Address, key []byte) error

    // UpdateContractCode abstracts code write to the trie. It is expected
    // to be moved to the stateWriter interface when the latter is ready.
    UpdateContractCode(address common.Address, codeHash common.Hash, code []byte) error

    // Hash returns the root hash of the trie. It does not write to the database and
    // can be used even if the trie doesn't have one.
    Hash() common.Hash

    // Commit collects all dirty nodes in the trie and replace them with the
    // corresponding node hash. All collected nodes(including dirty leaves if
    // collectLeaf is true) will be encapsulated into a nodeset for return.
    // The returned nodeset can be nil if the trie is clean(nothing to commit).
    // Once the trie is committed, it's not usable anymore. A new trie must
    // be created with new root and updated trie database for following usage
    Commit(collectLeaf bool) (common.Hash, *trienode.NodeSet)

    // Witness returns a set containing all trie nodes that have been accessed.
    // The returned map could be nil if the witness is empty.
    Witness() map[string][]byte

    // NodeIterator returns an iterator that returns nodes of the trie. Iteration
    // starts at the key after the given start key. And error will be returned
    // if fails to create node iterator.
    NodeIterator(startKey []byte) (trie.NodeIterator, error)

    // Prove constructs a Merkle proof for key. The result contains all encoded nodes
    // on the path to the value at key. The value itself is also included in the last
    // node and can be retrieved by verifying the proof.
    //
    // If the trie does not contain a value for key, the returned proof contains all
    // nodes of the longest existing prefix of the key (at least the root), ending
    // with the node that proves the absence of the key.
    Prove(key []byte, proofDb ethdb.KeyValueWriter) error

    // IsVerkle returns true if the trie is verkle-tree based
    IsVerkle() bool
}
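The write half of the interface follows the same pattern as the read half. The sketch below updates one storage slot and then collects the dirty nodes; the method signatures are taken from the interface above, while obtaining `tr` (for example via Database.OpenStorageTrie) is assumed to have already happened.

package statedbexample

import (
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/state"
    "github.com/ethereum/go-ethereum/trie/trienode"
)

// writeSlot mutates one storage slot and commits the trie, returning the new
// root together with the collected dirty nodes.
func writeSlot(tr state.Trie, addr common.Address, key, value []byte) (common.Hash, *trienode.NodeSet, error) {
    // Associate key with value in the storage trie (a zero-length value deletes the slot).
    if err := tr.UpdateStorage(addr, key, value); err != nil {
        return common.Hash{}, nil, err
    }
    // Hash can be recomputed at any point without persisting anything.
    _ = tr.Hash()
    // Commit collects the dirty nodes; the trie must not be reused afterwards.
    root, nodes := tr.Commit(false)
    return root, nodes, nil
}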

stateObject: the state of a single account. Its embedded StateAccount holds the account data: Nonce, Balance, Root, and CodeHash.

[Class diagram: stateObject contains a types.StateAccount]
type stateObject struct {
    db       *StateDB
    address  common.Address      // address of ethereum account
    addrHash common.Hash         // hash of ethereum address of the account
    origin   *types.StateAccount // Account original data without any change applied, nil means it was not existent
    data     types.StateAccount  // Account data with all mutations applied in the scope of block

    // Write caches.
    trie Trie   // storage trie, which becomes non-nil on first access
    code []byte // contract bytecode, which gets set when code is loaded

    originStorage  Storage // Storage entries that have been accessed within the current block
    dirtyStorage   Storage // Storage entries that have been modified within the current transaction
    pendingStorage Storage // Storage entries that have been modified within the current block

    // uncommittedStorage tracks a set of storage entries that have been modified
    // but not yet committed since the "last commit operation", along with their
    // original values before mutation.
    //
    // Specifically, the commit will be performed after each transaction before
    // the byzantium fork, therefore the map is already reset at the transaction
    // boundary; however post the byzantium fork, the commit will only be performed
    // at the end of block, this set essentially tracks all the modifications
    // made within the block.
    uncommittedStorage Storage

    // Cache flags.
    dirtyCode bool // true if the code was updated

    // Flag whether the account was marked as self-destructed. The self-destructed
    // account is still accessible in the scope of same transaction.
    selfDestructed bool

    // This is an EIP-6780 flag indicating whether the object is eligible for
    // self-destruct according to EIP-6780. The flag could be set either when
    // the contract is just created within the current transaction, or when the
    // object was previously existent and is being deployed as a contract within
    // the current transaction.
    newContract bool
}

type StateAccount struct {
    Nonce    uint64
    Balance  *uint256.Int
    Root     common.Hash // merkle root of the storage trie
    CodeHash []byte
}
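In the Merkle-Patricia account trie, each leaf value is the RLP encoding of such a StateAccount. A small hedged sketch (field values are made up) showing what ends up stored for a fresh externally owned account:

package statedbexample

import (
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/rlp"
    "github.com/holiman/uint256"
)

// encodeAccount builds a StateAccount and RLP-encodes it, which is the blob
// form that an account-trie leaf carries for a Merkle-Patricia state trie.
func encodeAccount() ([]byte, error) {
    acct := &types.StateAccount{
        Nonce:    1,
        Balance:  uint256.NewInt(1_000_000_000),
        Root:     types.EmptyRootHash,         // no storage slots yet
        CodeHash: types.EmptyCodeHash.Bytes(), // no code: an externally owned account
    }
    return rlp.EncodeToBytes(acct)
}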

Main operations

Committing: Commit / CommitWithUpdate

Commit: writes the accumulated state changes to storage. Once the state is committed, the cached trie can no longer be used; new state must be created from the new root and the updated database.

[Sequence diagram: client → StateDB.Commit → commitAndFlush → commit → IntermediateRoot, then stateObject.commit and Trie.Commit return the result; afterwards, if the disk database is non-nil and ret.codes is non-empty the contract code is written via a batch, and if ret is non-empty the snapshot.Tree and the trie database (triedb) are updated via Update]
func (s *StateDB) Commit(block uint64, deleteEmptyObjects bool, noStorageWiping bool) (common.Hash, error) {
    ret, err := s.commitAndFlush(block, deleteEmptyObjects, noStorageWiping)
    if err != nil {
        return common.Hash{}, err
    }
    return ret.root, nil
}

func (s *StateDB) commitAndFlush(block uint64, deleteEmptyObjects bool, noStorageWiping bool) (*stateUpdate, error) {
    ret, err := s.commit(deleteEmptyObjects, noStorageWiping, block)
    if err != nil {
        return nil, err
    }
    // Commit dirty contract code if any exists
    if db := s.db.TrieDB().Disk(); db != nil && len(ret.codes) > 0 {
        batch := db.NewBatch()
        for _, code := range ret.codes {
            rawdb.WriteCode(batch, code.hash, code.blob)
        }
        if err := batch.Write(); err != nil {
            return nil, err
        }
    }
    if !ret.empty() {
        // If snapshotting is enabled, update the snapshot tree with this new version
        if snap := s.db.Snapshot(); snap != nil && snap.Snapshot(ret.originRoot) != nil {
            start := time.Now()
            if err := snap.Update(ret.root, ret.originRoot, ret.accounts, ret.storages); err != nil {
                log.Warn("Failed to update snapshot tree", "from", ret.originRoot, "to", ret.root, "err", err)
            }
            // Keep 128 diff layers in the memory, persistent layer is 129th.
            // - head layer is paired with HEAD state
            // - head-1 layer is paired with HEAD-1 state
            // - head-127 layer(bottom-most diff layer) is paired with HEAD-127 state
            if err := snap.Cap(ret.root, TriesInMemory); err != nil {
                log.Warn("Failed to cap snapshot tree", "root", ret.root, "layers", TriesInMemory, "err", err)
            }
            s.SnapshotCommits += time.Since(start)
        }
        // If trie database is enabled, commit the state update as a new layer
        if db := s.db.TrieDB(); db != nil {
            start := time.Now()
            if err := db.Update(ret.root, ret.originRoot, block, ret.nodes, ret.stateSet()); err != nil {
                return nil, err
            }
            s.TrieDBCommits += time.Since(start)
        }
    }
    s.reader, _ = s.db.Reader(s.originalRoot)
    return ret, err
}

func (s *StateDB) commit(deleteEmptyObjects bool, noStorageWiping bool, blockNumber uint64) (*stateUpdate, error) {
    // Short circuit in case any database failure occurred earlier.
    if s.dbErr != nil {
        return nil, fmt.Errorf("commit aborted due to earlier error: %v", s.dbErr)
    }
    // Finalize any pending changes and merge everything into the tries
    s.IntermediateRoot(deleteEmptyObjects)

    // Short circuit if any error occurs within the IntermediateRoot.
    if s.dbErr != nil {
        return nil, fmt.Errorf("commit aborted due to database error: %v", s.dbErr)
    }
    // Commit objects to the trie, measuring the elapsed time
    var (
        accountTrieNodesUpdated int
        accountTrieNodesDeleted int
        storageTrieNodesUpdated int
        storageTrieNodesDeleted int

        lock    sync.Mutex                                               // protect two maps below
        nodes   = trienode.NewMergedNodeSet()                            // aggregated trie nodes
        updates = make(map[common.Hash]*accountUpdate, len(s.mutations)) // aggregated account updates

        // merge aggregates the dirty trie nodes into the global set.
        //
        // Given that some accounts may be destroyed and then recreated within
        // the same block, it's possible that a node set with the same owner
        // may already exist. In such cases, these two sets are combined, with
        // the later one overwriting the previous one if any nodes are modified
        // or deleted in both sets.
        //
        // merge run concurrently across all the state objects and account trie.
        merge = func(set *trienode.NodeSet) error {
            if set == nil {
                return nil
            }
            lock.Lock()
            defer lock.Unlock()

            updates, deletes := set.Size()
            if set.Owner == (common.Hash{}) {
                accountTrieNodesUpdated += updates
                accountTrieNodesDeleted += deletes
            } else {
                storageTrieNodesUpdated += updates
                storageTrieNodesDeleted += deletes
            }
            return nodes.Merge(set)
        }
    )
    // Given that some accounts could be destroyed and then recreated within
    // the same block, account deletions must be processed first. This ensures
    // that the storage trie nodes deleted during destruction and recreated
    // during subsequent resurrection can be combined correctly.
    deletes, delNodes, err := s.handleDestruction(noStorageWiping)
    if err != nil {
        return nil, err
    }
    for _, set := range delNodes {
        if err := merge(set); err != nil {
            return nil, err
        }
    }
    // Handle all state updates afterwards, concurrently to one another to shave
    // off some milliseconds from the commit operation. Also accumulate the code
    // writes to run in parallel with the computations.
    var (
        start   = time.Now()
        root    common.Hash
        workers errgroup.Group
    )
    // Schedule the account trie first since that will be the biggest, so give
    // it the most time to crunch.
    //
    // TODO(karalabe): This account trie commit is *very* heavy. 5-6ms at chain
    // heads, which seems excessive given that it doesn't do hashing, it just
    // shuffles some data. For comparison, the *hashing* at chain head is 2-3ms.
    // We need to investigate what's happening as it seems something's wonky.
    // Obviously it's not an end of the world issue, just something the original
    // code didn't anticipate for.
    workers.Go(func() error {
        // Write the account trie changes, measuring the amount of wasted time
        newroot, set := s.trie.Commit(true)
        root = newroot

        if err := merge(set); err != nil {
            return err
        }
        s.AccountCommits = time.Since(start)
        return nil
    })
    // Schedule each of the storage tries that need to be updated, so they can
    // run concurrently to one another.
    //
    // TODO(karalabe): Experimentally, the account commit takes approximately the
    // same time as all the storage commits combined, so we could maybe only have
    // 2 threads in total. But that kind of depends on the account commit being
    // more expensive than it should be, so let's fix that and revisit this todo.
    for addr, op := range s.mutations {
        if op.isDelete() {
            continue
        }
        // Write any contract code associated with the state object
        obj := s.stateObjects[addr]
        if obj == nil {
            return nil, errors.New("missing state object")
        }
        // Run the storage updates concurrently to one another
        workers.Go(func() error {
            // Write any storage changes in the state object to its storage trie
            update, set, err := obj.commit()
            if err != nil {
                return err
            }
            if err := merge(set); err != nil {
                return err
            }
            lock.Lock()
            updates[obj.addrHash] = update
            s.StorageCommits = time.Since(start) // overwrite with the longest storage commit runtime
            lock.Unlock()
            return nil
        })
    }
    // Wait for everything to finish and update the metrics
    if err := workers.Wait(); err != nil {
        return nil, err
    }
    accountReadMeters.Mark(int64(s.AccountLoaded))
    storageReadMeters.Mark(int64(s.StorageLoaded))
    accountUpdatedMeter.Mark(int64(s.AccountUpdated))
    storageUpdatedMeter.Mark(s.StorageUpdated.Load())
    accountDeletedMeter.Mark(int64(s.AccountDeleted))
    storageDeletedMeter.Mark(s.StorageDeleted.Load())
    accountTrieUpdatedMeter.Mark(int64(accountTrieNodesUpdated))
    accountTrieDeletedMeter.Mark(int64(accountTrieNodesDeleted))
    storageTriesUpdatedMeter.Mark(int64(storageTrieNodesUpdated))
    storageTriesDeletedMeter.Mark(int64(storageTrieNodesDeleted))

    // Clear the metric markers
    s.AccountLoaded, s.AccountUpdated, s.AccountDeleted = 0, 0, 0
    s.StorageLoaded = 0
    s.StorageUpdated.Store(0)
    s.StorageDeleted.Store(0)

    // Clear all internal flags and update state root at the end.
    s.mutations = make(map[common.Address]*mutation)
    s.stateObjectsDestruct = make(map[common.Address]*stateObject)

    origin := s.originalRoot
    s.originalRoot = root
    return newStateUpdate(noStorageWiping, origin, root, blockNumber, deletes, updates, nodes), nil
}
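From the caller's side the flow is simple. The sketch below assumes `statedb` is a *state.StateDB that already contains a block's worth of executed-transaction changes; the Commit signature follows the code shown above (it differs between go-ethereum versions).

package statedbexample

import (
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/state"
)

// sealBlockState persists all pending state changes for a block and returns the new root.
func sealBlockState(statedb *state.StateDB, blockNumber uint64) (common.Hash, error) {
    // Commit runs IntermediateRoot internally, flushes dirty contract code, updates the
    // snapshot tree and pushes a new layer into the trie database. After it returns, the
    // cached trie is stale: open a fresh StateDB at the returned root to continue.
    return statedb.Commit(blockNumber, true, false) // deleteEmptyObjects=true, noStorageWiping=false
}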

Computing the intermediate root: IntermediateRoot

IntermediateRoot: first updates each mutated account's storage trie, then applies the pending deletions and updates to the global state trie, and finally computes the root hash of the changed state.

func (s *StateDB) IntermediateRoot(deleteEmptyObjects bool) common.Hash {
    // Finalise all the dirty storage states and write them into the tries
    s.Finalise(deleteEmptyObjects)

    // Initialize the trie if it's not constructed yet. If the prefetch
    // is enabled, the trie constructed below will be replaced by the
    // prefetched one.
    //
    // This operation must be done before state object storage hashing,
    // as it assumes the main trie is already loaded.
    if s.trie == nil {
        tr, err := s.db.OpenTrie(s.originalRoot)
        if err != nil {
            s.setError(err)
            return common.Hash{}
        }
        s.trie = tr
    }
    // If there was a trie prefetcher operating, terminate it async so that the
    // individual storage tries can be updated as soon as the disk load finishes.
    if s.prefetcher != nil {
        s.prefetcher.terminate(true)
        defer func() {
            s.prefetcher.report()
            s.prefetcher = nil // Pre-byzantium, unset any used up prefetcher
        }()
    }
    // Process all storage updates concurrently. The state object update root
    // method will internally call a blocking trie fetch from the prefetcher,
    // so there's no need to explicitly wait for the prefetchers to finish.
    var (
        start   = time.Now()
        workers errgroup.Group
    )
    if s.db.TrieDB().IsVerkle() {
        // Whilst MPT storage tries are independent, Verkle has one single trie
        // for all the accounts and all the storage slots merged together. The
        // former can thus be simply parallelized, but updating the latter will
        // need concurrency support within the trie itself. That's a TODO for a
        // later time.
        workers.SetLimit(1)
    }
    for addr, op := range s.mutations {
        if op.applied || op.isDelete() {
            continue
        }
        obj := s.stateObjects[addr] // closure for the task runner below
        workers.Go(func() error {
            if s.db.TrieDB().IsVerkle() {
                obj.updateTrie()
            } else {
                obj.updateRoot()

                // If witness building is enabled and the state object has a trie,
                // gather the witnesses for its specific storage trie
                if s.witness != nil && obj.trie != nil {
                    s.witness.AddState(obj.trie.Witness())
                }
            }
            return nil
        })
    }
    // If witness building is enabled, gather all the read-only accesses.
    // Skip witness collection in Verkle mode, they will be gathered
    // together at the end.
    if s.witness != nil && !s.db.TrieDB().IsVerkle() {
        // Pull in anything that has been accessed before destruction
        for _, obj := range s.stateObjectsDestruct {
            // Skip any objects that haven't touched their storage
            if len(obj.originStorage) == 0 {
                continue
            }
            if trie := obj.getPrefetchedTrie(); trie != nil {
                witness := trie.Witness()
                s.witness.AddState(witness)
                if s.witnessStats != nil {
                    s.witnessStats.Add(witness, obj.addrHash)
                }
            } else if obj.trie != nil {
                witness := obj.trie.Witness()
                s.witness.AddState(witness)
                if s.witnessStats != nil {
                    s.witnessStats.Add(witness, obj.addrHash)
                }
            }
        }
        // Pull in only-read and non-destructed trie witnesses
        for _, obj := range s.stateObjects {
            // Skip any objects that have been updated
            if _, ok := s.mutations[obj.address]; ok {
                continue
            }
            // Skip any objects that haven't touched their storage
            if len(obj.originStorage) == 0 {
                continue
            }
            if trie := obj.getPrefetchedTrie(); trie != nil {
                witness := trie.Witness()
                s.witness.AddState(witness)
                if s.witnessStats != nil {
                    s.witnessStats.Add(witness, obj.addrHash)
                }
            } else if obj.trie != nil {
                witness := obj.trie.Witness()
                s.witness.AddState(witness)
                if s.witnessStats != nil {
                    s.witnessStats.Add(witness, obj.addrHash)
                }
            }
        }
    }
    workers.Wait()
    s.StorageUpdates += time.Since(start)

    // Now we're about to start to write changes to the trie. The trie is so far
    // _untouched_. We can check with the prefetcher, if it can give us a trie
    // which has the same root, but also has some content loaded into it.
    //
    // Don't check prefetcher if verkle trie has been used. In the context of verkle,
    // only a single trie is used for state hashing. Replacing a non-nil verkle tree
    // here could result in losing uncommitted changes from storage.
    start = time.Now()
    if s.prefetcher != nil {
        if trie := s.prefetcher.trie(common.Hash{}, s.originalRoot); trie == nil {
            log.Error("Failed to retrieve account pre-fetcher trie")
        } else {
            s.trie = trie
        }
    }
    // Perform updates before deletions. This prevents resolution of unnecessary trie nodes
    // in circumstances similar to the following:
    //
    // Consider nodes `A` and `B` who share the same full node parent `P` and have no other siblings.
    // During the execution of a block:
    // - `A` self-destructs,
    // - `C` is created, and also shares the parent `P`.
    // If the self-destruct is handled first, then `P` would be left with only one child, thus collapsed
    // into a shortnode. This requires `B` to be resolved from disk.
    // Whereas if the created node is handled first, then the collapse is avoided, and `B` is not resolved.
    var (
        usedAddrs    []common.Address
        deletedAddrs []common.Address
    )
    for addr, op := range s.mutations {
        if op.applied {
            continue
        }
        op.applied = true

        if op.isDelete() {
            deletedAddrs = append(deletedAddrs, addr)
        } else {
            s.updateStateObject(s.stateObjects[addr])
            s.AccountUpdated += 1
        }
        usedAddrs = append(usedAddrs, addr) // Copy needed for closure
    }
    for _, deletedAddr := range deletedAddrs {
        s.deleteStateObject(deletedAddr)
        s.AccountDeleted += 1
    }
    s.AccountUpdates += time.Since(start)

    if s.prefetcher != nil {
        s.prefetcher.used(common.Hash{}, s.originalRoot, usedAddrs, nil)
    }
    // Track the amount of time wasted on hashing the account trie
    defer func(start time.Time) { s.AccountHashes += time.Since(start) }(time.Now())

    hash := s.trie.Hash()

    // If witness building is enabled, gather the account trie witness
    if s.witness != nil {
        witness := s.trie.Witness()
        s.witness.AddState(witness)
        if s.witnessStats != nil {
            s.witnessStats.Add(witness, common.Hash{})
        }
    }
    return hash
}
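A hedged sketch of where this method typically sits in block processing: before the Byzantium fork an intermediate state root was computed after every transaction and stored in the receipt, while afterwards Finalise between transactions is enough and the root is computed once per block. `statedb` and `isByzantium` are assumed inputs; the signatures match the code shown in this article.

package statedbexample

import "github.com/ethereum/go-ethereum/core/state"

// receiptPostState mirrors the per-transaction pattern: pre-Byzantium return the
// intermediate root, post-Byzantium only finalise the dirty state.
func receiptPostState(statedb *state.StateDB, isByzantium bool) []byte {
    if isByzantium {
        // Post-Byzantium receipts no longer carry an intermediate root.
        statedb.Finalise(true)
        return nil
    }
    // Pre-Byzantium: an intermediate state root is computed after every transaction.
    return statedb.IntermediateRoot(true).Bytes()
}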

Finalising dirty objects: Finalise

Finalise removes objects that are marked for destruction and clears the journal and the refund counter. It only marks objects for deletion or update; it does not commit any changes to the tries.

func (s *StateDB) Finalise(deleteEmptyObjects bool) {
    addressesToPrefetch := make([]common.Address, 0, len(s.journal.dirties))
    for addr := range s.journal.dirties {
        obj, exist := s.stateObjects[addr]
        if !exist {
            // ripeMD is 'touched' at block 1714175, in tx 0x1237f737031e40bcde4a8b7e717b2d15e3ecadfe49bb1bbc71ee9deb09c6fcf2
            // That tx goes out of gas, and although the notion of 'touched' does not exist there, the
            // touch-event will still be recorded in the journal. Since ripeMD is a special snowflake,
            // it will persist in the journal even though the journal is reverted. In this special circumstance,
            // it may exist in `s.journal.dirties` but not in `s.stateObjects`.
            // Thus, we can safely ignore it here
            continue
        }
        if obj.selfDestructed || (deleteEmptyObjects && obj.empty()) {
            delete(s.stateObjects, obj.address)
            s.markDelete(addr)

            // We need to maintain account deletions explicitly (will remain
            // set indefinitely). Note only the first occurred self-destruct
            // event is tracked.
            if _, ok := s.stateObjectsDestruct[obj.address]; !ok {
                s.stateObjectsDestruct[obj.address] = obj
            }
        } else {
            obj.finalise()
            s.markUpdate(addr)
        }
        // At this point, also ship the address off to the precacher. The precacher
        // will start loading tries, and when the change is eventually committed,
        // the commit-phase will be a lot faster
        addressesToPrefetch = append(addressesToPrefetch, addr) // Copy needed for closure
    }
    if s.prefetcher != nil && len(addressesToPrefetch) > 0 {
        if err := s.prefetcher.prefetch(common.Hash{}, s.originalRoot, common.Address{}, addressesToPrefetch, nil, false); err != nil {
            log.Error("Failed to prefetch addresses", "addresses", len(addressesToPrefetch), "err", err)
        }
    }
    // Invalidate journal because reverting across transactions is not allowed.
    s.clearJournalAndRefund()
}

func (s *StateDB) clearJournalAndRefund() {
    s.journal.reset()
    s.refund = 0
}
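The journal that Finalise ultimately clears is also what powers intra-transaction reverts: Snapshot and RevertToSnapshot only work within a transaction, and once Finalise runs at the transaction boundary the journal and the refund counter are reset. A minimal hedged sketch, assuming `statedb` is a *state.StateDB and using the refund counter as the simplest journaled change:

package statedbexample

import "github.com/ethereum/go-ethereum/core/state"

// tryAndRevert records a revision, makes a journaled change, rolls it back, and
// then crosses a transaction boundary, after which reverting is no longer possible.
func tryAndRevert(statedb *state.StateDB) {
    rev := statedb.Snapshot()     // record a revision marker in the journal
    statedb.AddRefund(100)        // any journaled change; the refund counter keeps the example simple
    statedb.RevertToSnapshot(rev) // undo everything journaled since the marker
    statedb.Finalise(true)        // transaction boundary: journal and refund counter are reset
}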